00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2378 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3643 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.098 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.099 The recommended git tool is: git 00:00:00.099 using credential 00000000-0000-0000-0000-000000000002 00:00:00.100 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.129 Fetching changes from the remote Git repository 00:00:00.131 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.165 Using shallow fetch with depth 1 00:00:00.165 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.165 > git --version # timeout=10 00:00:00.209 > git --version # 'git version 2.39.2' 00:00:00.209 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.242 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.242 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.882 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.894 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.908 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.908 > git config core.sparsecheckout # timeout=10 00:00:04.918 > git read-tree -mu HEAD # timeout=10 00:00:04.935 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # 
timeout=5 00:00:04.950 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.950 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.044 [Pipeline] Start of Pipeline 00:00:05.057 [Pipeline] library 00:00:05.059 Loading library shm_lib@master 00:00:05.059 Library shm_lib@master is cached. Copying from home. 00:00:05.075 [Pipeline] node 00:00:05.102 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.104 [Pipeline] { 00:00:05.115 [Pipeline] catchError 00:00:05.117 [Pipeline] { 00:00:05.133 [Pipeline] wrap 00:00:05.143 [Pipeline] { 00:00:05.152 [Pipeline] stage 00:00:05.154 [Pipeline] { (Prologue) 00:00:05.374 [Pipeline] sh 00:00:06.244 + logger -p user.info -t JENKINS-CI 00:00:06.279 [Pipeline] echo 00:00:06.281 Node: GP11 00:00:06.288 [Pipeline] sh 00:00:06.656 [Pipeline] setCustomBuildProperty 00:00:06.666 [Pipeline] echo 00:00:06.667 Cleanup processes 00:00:06.671 [Pipeline] sh 00:00:06.963 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.963 4685 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.978 [Pipeline] sh 00:00:07.271 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.271 ++ grep -v 'sudo pgrep' 00:00:07.271 ++ awk '{print $1}' 00:00:07.271 + sudo kill -9 00:00:07.271 + true 00:00:07.289 [Pipeline] cleanWs 00:00:07.302 [WS-CLEANUP] Deleting project workspace... 00:00:07.302 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.320 [WS-CLEANUP] done 00:00:07.324 [Pipeline] setCustomBuildProperty 00:00:07.338 [Pipeline] sh 00:00:07.645 + sudo git config --global --replace-all safe.directory '*' 00:00:07.743 [Pipeline] httpRequest 00:00:09.572 [Pipeline] echo 00:00:09.574 Sorcerer 10.211.164.20 is alive 00:00:09.582 [Pipeline] retry 00:00:09.584 [Pipeline] { 00:00:09.596 [Pipeline] httpRequest 00:00:09.601 HttpMethod: GET 00:00:09.601 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.603 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.607 Response Code: HTTP/1.1 200 OK 00:00:09.608 Success: Status code 200 is in the accepted range: 200,404 00:00:09.608 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.616 [Pipeline] } 00:00:10.636 [Pipeline] // retry 00:00:10.643 [Pipeline] sh 00:00:10.931 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.950 [Pipeline] httpRequest 00:00:11.316 [Pipeline] echo 00:00:11.318 Sorcerer 10.211.164.20 is alive 00:00:11.327 [Pipeline] retry 00:00:11.330 [Pipeline] { 00:00:11.344 [Pipeline] httpRequest 00:00:11.349 HttpMethod: GET 00:00:11.350 URL: http://10.211.164.20/packages/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:00:11.351 Sending request to url: http://10.211.164.20/packages/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:00:11.380 Response Code: HTTP/1.1 200 OK 00:00:11.380 Success: Status code 200 is in the accepted range: 200,404 00:00:11.381 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:01:38.500 [Pipeline] } 00:01:38.522 [Pipeline] // retry 00:01:38.533 [Pipeline] sh 00:01:38.838 + tar --no-same-owner -xf spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:01:41.416 [Pipeline] sh 00:01:41.716 + git -C spdk log 
--oneline -n5 00:01:41.716 d47eb51c9 bdev: fix a race between reset start and complete 00:01:41.716 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:01:41.716 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:01:41.716 4bcab9fb9 correct kick for CQ full case 00:01:41.716 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:01:41.737 [Pipeline] withCredentials 00:01:41.752 > git --version # timeout=10 00:01:41.764 > git --version # 'git version 2.39.2' 00:01:41.796 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:41.798 [Pipeline] { 00:01:41.807 [Pipeline] retry 00:01:41.809 [Pipeline] { 00:01:41.824 [Pipeline] sh 00:01:42.326 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:42.602 [Pipeline] } 00:01:42.620 [Pipeline] // retry 00:01:42.625 [Pipeline] } 00:01:42.641 [Pipeline] // withCredentials 00:01:42.650 [Pipeline] httpRequest 00:01:43.067 [Pipeline] echo 00:01:43.068 Sorcerer 10.211.164.20 is alive 00:01:43.078 [Pipeline] retry 00:01:43.080 [Pipeline] { 00:01:43.092 [Pipeline] httpRequest 00:01:43.098 HttpMethod: GET 00:01:43.098 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:43.099 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:43.123 Response Code: HTTP/1.1 200 OK 00:01:43.124 Success: Status code 200 is in the accepted range: 200,404 00:01:43.124 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:02:03.849 [Pipeline] } 00:02:03.866 [Pipeline] // retry 00:02:03.873 [Pipeline] sh 00:02:04.158 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:02:06.080 [Pipeline] sh 00:02:06.371 + git -C dpdk log --oneline -n5 00:02:06.371 caf0f5d395 version: 22.11.4 00:02:06.371 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW 
interrupt" 00:02:06.371 dc9c799c7d vhost: fix missing spinlock unlock 00:02:06.371 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:06.371 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:06.383 [Pipeline] } 00:02:06.396 [Pipeline] // stage 00:02:06.405 [Pipeline] stage 00:02:06.407 [Pipeline] { (Prepare) 00:02:06.427 [Pipeline] writeFile 00:02:06.442 [Pipeline] sh 00:02:06.731 + logger -p user.info -t JENKINS-CI 00:02:06.742 [Pipeline] sh 00:02:07.027 + logger -p user.info -t JENKINS-CI 00:02:07.041 [Pipeline] sh 00:02:07.323 + cat autorun-spdk.conf 00:02:07.323 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.323 SPDK_TEST_NVMF=1 00:02:07.323 SPDK_TEST_NVME_CLI=1 00:02:07.323 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:07.323 SPDK_TEST_NVMF_NICS=e810 00:02:07.323 SPDK_TEST_VFIOUSER=1 00:02:07.323 SPDK_RUN_UBSAN=1 00:02:07.323 NET_TYPE=phy 00:02:07.323 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:07.323 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:07.332 RUN_NIGHTLY=1 00:02:07.338 [Pipeline] readFile 00:02:07.394 [Pipeline] withEnv 00:02:07.396 [Pipeline] { 00:02:07.404 [Pipeline] sh 00:02:07.691 + set -ex 00:02:07.691 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:07.691 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:07.691 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.691 ++ SPDK_TEST_NVMF=1 00:02:07.691 ++ SPDK_TEST_NVME_CLI=1 00:02:07.691 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:07.691 ++ SPDK_TEST_NVMF_NICS=e810 00:02:07.691 ++ SPDK_TEST_VFIOUSER=1 00:02:07.691 ++ SPDK_RUN_UBSAN=1 00:02:07.691 ++ NET_TYPE=phy 00:02:07.691 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:07.691 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:07.691 ++ RUN_NIGHTLY=1 00:02:07.691 + case $SPDK_TEST_NVMF_NICS in 00:02:07.691 + DRIVERS=ice 00:02:07.691 + [[ tcp == \r\d\m\a ]] 00:02:07.691 + [[ -n ice ]] 00:02:07.691 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 
00:02:07.691 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:11.001 rmmod: ERROR: Module irdma is not currently loaded 00:02:11.001 rmmod: ERROR: Module i40iw is not currently loaded 00:02:11.001 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:11.001 + true 00:02:11.001 + for D in $DRIVERS 00:02:11.001 + sudo modprobe ice 00:02:11.001 + exit 0 00:02:11.013 [Pipeline] } 00:02:11.027 [Pipeline] // withEnv 00:02:11.031 [Pipeline] } 00:02:11.042 [Pipeline] // stage 00:02:11.051 [Pipeline] catchError 00:02:11.052 [Pipeline] { 00:02:11.065 [Pipeline] timeout 00:02:11.065 Timeout set to expire in 1 hr 0 min 00:02:11.067 [Pipeline] { 00:02:11.079 [Pipeline] stage 00:02:11.081 [Pipeline] { (Tests) 00:02:11.094 [Pipeline] sh 00:02:11.385 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:11.385 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:11.385 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:11.385 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:11.385 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.385 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:11.385 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:11.385 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:11.385 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:11.385 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:11.385 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:11.385 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:11.385 + source /etc/os-release 00:02:11.386 ++ NAME='Fedora Linux' 00:02:11.386 ++ VERSION='39 (Cloud Edition)' 00:02:11.386 ++ ID=fedora 00:02:11.386 ++ VERSION_ID=39 00:02:11.386 ++ VERSION_CODENAME= 00:02:11.386 ++ PLATFORM_ID=platform:f39 00:02:11.386 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:11.386 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:11.386 ++ LOGO=fedora-logo-icon 00:02:11.386 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:11.386 ++ HOME_URL=https://fedoraproject.org/ 00:02:11.386 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:11.386 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:11.386 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:11.386 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:11.386 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:11.386 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:11.386 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:11.386 ++ SUPPORT_END=2024-11-12 00:02:11.386 ++ VARIANT='Cloud Edition' 00:02:11.386 ++ VARIANT_ID=cloud 00:02:11.386 + uname -a 00:02:11.386 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:11.386 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:12.327 Hugepages 00:02:12.327 node hugesize free / total 00:02:12.327 node0 1048576kB 0 / 0 00:02:12.327 node0 2048kB 0 / 0 00:02:12.327 node1 1048576kB 0 / 0 00:02:12.327 node1 2048kB 0 / 0 00:02:12.327 00:02:12.327 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:12.327 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:02:12.327 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 
00:02:12.327 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:02:12.327 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:02:12.327 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:02:12.327 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:02:12.327 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:02:12.327 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:02:12.327 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:02:12.327 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:02:12.327 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:02:12.327 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:02:12.327 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:02:12.327 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:02:12.327 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:02:12.327 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:02:12.327 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:02:12.327 + rm -f /tmp/spdk-ld-path 00:02:12.327 + source autorun-spdk.conf 00:02:12.327 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:12.327 ++ SPDK_TEST_NVMF=1 00:02:12.327 ++ SPDK_TEST_NVME_CLI=1 00:02:12.327 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:12.327 ++ SPDK_TEST_NVMF_NICS=e810 00:02:12.327 ++ SPDK_TEST_VFIOUSER=1 00:02:12.327 ++ SPDK_RUN_UBSAN=1 00:02:12.327 ++ NET_TYPE=phy 00:02:12.327 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:12.327 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:12.327 ++ RUN_NIGHTLY=1 00:02:12.327 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:12.327 + [[ -n '' ]] 00:02:12.327 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:12.588 + for M in /var/spdk/build-*-manifest.txt 00:02:12.588 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:12.588 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:12.588 + for M in /var/spdk/build-*-manifest.txt 00:02:12.588 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:12.588 + cp /var/spdk/build-pkg-manifest.txt 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:12.588 + for M in /var/spdk/build-*-manifest.txt 00:02:12.588 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:12.588 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:12.588 ++ uname 00:02:12.588 + [[ Linux == \L\i\n\u\x ]] 00:02:12.588 + sudo dmesg -T 00:02:12.588 + sudo dmesg --clear 00:02:12.588 + dmesg_pid=6053 00:02:12.588 + [[ Fedora Linux == FreeBSD ]] 00:02:12.588 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:12.588 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:12.588 + sudo dmesg -Tw 00:02:12.588 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:12.588 + [[ -x /usr/src/fio-static/fio ]] 00:02:12.588 + export FIO_BIN=/usr/src/fio-static/fio 00:02:12.588 + FIO_BIN=/usr/src/fio-static/fio 00:02:12.588 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:12.588 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:12.588 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:12.588 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:12.588 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:12.588 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:12.588 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:12.588 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:12.588 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:12.588 20:03:24 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:12.588 20:03:24 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:12.588 20:03:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:12.588 20:03:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:12.588 20:03:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- 
$ SPDK_TEST_NVME_CLI=1 00:02:12.588 20:03:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:12.588 20:03:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:02:12.588 20:03:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:02:12.588 20:03:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:02:12.588 20:03:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:02:12.588 20:03:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:12.588 20:03:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:12.588 20:03:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:02:12.588 20:03:24 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:12.588 20:03:24 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:12.588 20:03:24 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:12.588 20:03:24 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:12.588 20:03:24 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:12.588 20:03:24 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:12.588 20:03:24 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:12.588 20:03:24 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:12.588 20:03:24 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.588 20:03:24 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.588 20:03:24 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.588 20:03:24 -- paths/export.sh@5 -- $ export PATH 00:02:12.588 20:03:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.588 20:03:24 -- 
common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:12.588 20:03:24 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:12.588 20:03:24 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731956604.XXXXXX 00:02:12.588 20:03:24 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731956604.0SONTg 00:02:12.588 20:03:24 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:12.588 20:03:24 -- common/autobuild_common.sh@492 -- $ '[' -n v22.11.4 ']' 00:02:12.588 20:03:24 -- common/autobuild_common.sh@493 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:12.588 20:03:24 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:02:12.588 20:03:24 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:12.588 20:03:24 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:12.588 20:03:24 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:12.588 20:03:24 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:12.588 20:03:24 -- common/autotest_common.sh@10 -- $ set +x 00:02:12.588 20:03:24 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:02:12.588 20:03:24 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:12.588 20:03:24 -- pm/common@17 -- $ local monitor 00:02:12.588 20:03:24 -- 
pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.588 20:03:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.588 20:03:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.588 20:03:24 -- pm/common@21 -- $ date +%s 00:02:12.588 20:03:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.588 20:03:24 -- pm/common@21 -- $ date +%s 00:02:12.588 20:03:24 -- pm/common@25 -- $ sleep 1 00:02:12.588 20:03:24 -- pm/common@21 -- $ date +%s 00:02:12.588 20:03:24 -- pm/common@21 -- $ date +%s 00:02:12.588 20:03:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731956604 00:02:12.588 20:03:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731956604 00:02:12.588 20:03:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731956604 00:02:12.588 20:03:24 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731956604 00:02:12.588 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731956604_collect-vmstat.pm.log 00:02:12.588 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731956604_collect-cpu-load.pm.log 00:02:12.588 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731956604_collect-cpu-temp.pm.log 00:02:12.588 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731956604_collect-bmc-pm.bmc.pm.log 00:02:13.531 20:03:25 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:13.531 20:03:25 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:13.531 20:03:25 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:13.531 20:03:25 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:13.531 20:03:25 -- spdk/autobuild.sh@16 -- $ date -u 00:02:13.531 Mon Nov 18 07:03:25 PM UTC 2024 00:02:13.531 20:03:25 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:13.791 v25.01-pre-190-gd47eb51c9 00:02:13.791 20:03:25 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:13.791 20:03:25 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:13.791 20:03:25 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:13.791 20:03:25 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:13.791 20:03:25 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:13.791 20:03:25 -- common/autotest_common.sh@10 -- $ set +x 00:02:13.791 ************************************ 00:02:13.791 START TEST ubsan 00:02:13.791 ************************************ 00:02:13.791 20:03:25 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:13.791 using ubsan 00:02:13.791 00:02:13.791 real 0m0.000s 00:02:13.791 user 0m0.000s 00:02:13.791 sys 0m0.000s 00:02:13.791 20:03:25 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:13.791 20:03:25 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:13.791 ************************************ 00:02:13.791 END TEST ubsan 00:02:13.791 ************************************ 00:02:13.791 20:03:25 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:13.791 20:03:25 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:13.791 20:03:25 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:13.791 20:03:25 -- 
common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:02:13.791 20:03:25 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:13.791 20:03:25 -- common/autotest_common.sh@10 -- $ set +x 00:02:13.791 ************************************ 00:02:13.791 START TEST build_native_dpdk 00:02:13.791 ************************************ 00:02:13.791 20:03:25 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:13.791 20:03:25 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:13.791 20:03:25 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:13.791 20:03:25 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:13.791 20:03:25 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:13.791 20:03:25 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:13.791 20:03:25 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:13.791 20:03:25 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:13.791 20:03:25 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:13.791 20:03:25 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:13.791 20:03:25 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:13.791 20:03:25 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:13.791 20:03:25 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:13.791 20:03:25 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:13.791 20:03:25 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:13.791 20:03:25 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:13.791 20:03:25 build_native_dpdk -- 
common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:13.791 20:03:25 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:13.791 20:03:25 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:02:13.791 20:03:25 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:13.791 20:03:25 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:02:13.791 caf0f5d395 version: 22.11.4 00:02:13.791 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:13.791 dc9c799c7d vhost: fix missing spinlock unlock 00:02:13.791 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:13.791 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:13.791 20:03:25 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:13.791 20:03:25 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:13.791 20:03:25 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:13.791 20:03:25 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:13.791 20:03:25 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:13.791 20:03:25 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:13.791 20:03:25 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:13.791 20:03:25 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:13.791 20:03:25 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:13.792 20:03:25 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" 
"bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:13.792 20:03:25 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:13.792 20:03:25 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:13.792 20:03:25 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:13.792 20:03:25 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:13.792 20:03:25 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:13.792 20:03:25 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:13.792 20:03:25 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:13.792 20:03:25 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:13.792 20:03:25 build_native_dpdk -- 
scripts/common.sh@364 -- $ (( v = 0 )) 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:13.792 20:03:25 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:13.792 patching file config/rte_config.h 00:02:13.792 Hunk #1 succeeded at 60 (offset 1 line). 
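The cmp_versions trace above splits each version string on `.`, `-`, and `:` (via `IFS=.-:` and `read -ra`) and compares the components numerically, left to right, stopping at the first difference. A minimal bash sketch of that comparison (simplified from the scripts/common.sh logic shown in the trace; the helper name `version_lt` is ours, and unlike the real script it does not special-case non-numeric components):

```shell
# Sketch of the "lt" version check traced above (bash; illustration only).
version_lt() {
    local IFS=.-:                 # split on '.', '-' and ':' as in the trace
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=${#ver1[@]} a b
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        a=$((10#${ver1[v]:-0}))   # force base-10, missing components are 0
        b=$((10#${ver2[v]:-0}))
        (( a > b )) && return 1   # first differing component decides
        (( a < b )) && return 0
    done
    return 1                      # equal versions compare as "not less than"
}

version_lt 22.11.4 21.11.0 || echo "22.11.4 is not older than 21.11.0"
```

This matches the trace: `lt 22.11.4 21.11.0` returns 1 (so no downgrade patching path is taken), while the later `lt 22.11.4 24.07.0` check returns 0 and triggers the rte_pcapng.c patch.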
00:02:13.792 20:03:25 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:13.792 20:03:25 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:13.792 patching file lib/pcapng/rte_pcapng.c 00:02:13.792 Hunk #1 succeeded at 110 (offset -18 lines). 
00:02:13.792 20:03:25 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:13.792 20:03:25 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:13.792 20:03:25 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:13.792 20:03:25 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:13.792 20:03:25 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:13.792 20:03:25 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:13.792 20:03:25 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:20.361 The Meson build system 00:02:20.361 Version: 
1.5.0 00:02:20.362 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:20.362 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:02:20.362 Build type: native build 00:02:20.362 Program cat found: YES (/usr/bin/cat) 00:02:20.362 Project name: DPDK 00:02:20.362 Project version: 22.11.4 00:02:20.362 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:20.362 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:20.362 Host machine cpu family: x86_64 00:02:20.362 Host machine cpu: x86_64 00:02:20.362 Message: ## Building in Developer Mode ## 00:02:20.362 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:20.362 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:02:20.362 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:02:20.362 Program objdump found: YES (/usr/bin/objdump) 00:02:20.362 Program python3 found: YES (/usr/bin/python3) 00:02:20.362 Program cat found: YES (/usr/bin/cat) 00:02:20.362 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:20.362 Checking for size of "void *" : 8 00:02:20.362 Checking for size of "void *" : 8 (cached) 00:02:20.362 Library m found: YES 00:02:20.362 Library numa found: YES 00:02:20.362 Has header "numaif.h" : YES 00:02:20.362 Library fdt found: NO 00:02:20.362 Library execinfo found: NO 00:02:20.362 Has header "execinfo.h" : YES 00:02:20.362 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:20.362 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:20.362 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:20.362 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:20.362 Run-time dependency openssl found: YES 3.1.1 00:02:20.362 Run-time dependency libpcap found: YES 1.10.4 00:02:20.362 Has header "pcap.h" with dependency libpcap: YES 00:02:20.362 Compiler for C supports arguments -Wcast-qual: YES 00:02:20.362 Compiler for C supports arguments -Wdeprecated: YES 00:02:20.362 Compiler for C supports arguments -Wformat: YES 00:02:20.362 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:20.362 Compiler for C supports arguments -Wformat-security: NO 00:02:20.362 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:20.362 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:20.362 Compiler for C supports arguments -Wnested-externs: YES 00:02:20.362 Compiler for C supports arguments -Wold-style-definition: YES 00:02:20.362 Compiler for C supports arguments -Wpointer-arith: YES 00:02:20.362 Compiler for C supports arguments -Wsign-compare: YES 00:02:20.362 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:20.362 Compiler for C supports arguments -Wundef: YES 00:02:20.362 Compiler for C supports arguments -Wwrite-strings: YES 00:02:20.362 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:20.362 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:20.362 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:20.362 
Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:20.362 Compiler for C supports arguments -mavx512f: YES 00:02:20.362 Checking if "AVX512 checking" compiles: YES 00:02:20.362 Fetching value of define "__SSE4_2__" : 1 00:02:20.362 Fetching value of define "__AES__" : 1 00:02:20.362 Fetching value of define "__AVX__" : 1 00:02:20.362 Fetching value of define "__AVX2__" : (undefined) 00:02:20.362 Fetching value of define "__AVX512BW__" : (undefined) 00:02:20.362 Fetching value of define "__AVX512CD__" : (undefined) 00:02:20.362 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:20.362 Fetching value of define "__AVX512F__" : (undefined) 00:02:20.362 Fetching value of define "__AVX512VL__" : (undefined) 00:02:20.362 Fetching value of define "__PCLMUL__" : 1 00:02:20.362 Fetching value of define "__RDRND__" : 1 00:02:20.362 Fetching value of define "__RDSEED__" : (undefined) 00:02:20.362 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:20.362 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:20.362 Message: lib/kvargs: Defining dependency "kvargs" 00:02:20.362 Message: lib/telemetry: Defining dependency "telemetry" 00:02:20.362 Checking for function "getentropy" : YES 00:02:20.362 Message: lib/eal: Defining dependency "eal" 00:02:20.362 Message: lib/ring: Defining dependency "ring" 00:02:20.362 Message: lib/rcu: Defining dependency "rcu" 00:02:20.362 Message: lib/mempool: Defining dependency "mempool" 00:02:20.362 Message: lib/mbuf: Defining dependency "mbuf" 00:02:20.362 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:20.362 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:20.362 Compiler for C supports arguments -mpclmul: YES 00:02:20.362 Compiler for C supports arguments -maes: YES 00:02:20.362 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:20.362 Compiler for C supports arguments -mavx512bw: YES 00:02:20.362 Compiler for C supports arguments -mavx512dq: YES 
00:02:20.362 Compiler for C supports arguments -mavx512vl: YES 00:02:20.362 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:20.362 Compiler for C supports arguments -mavx2: YES 00:02:20.362 Compiler for C supports arguments -mavx: YES 00:02:20.362 Message: lib/net: Defining dependency "net" 00:02:20.362 Message: lib/meter: Defining dependency "meter" 00:02:20.362 Message: lib/ethdev: Defining dependency "ethdev" 00:02:20.362 Message: lib/pci: Defining dependency "pci" 00:02:20.362 Message: lib/cmdline: Defining dependency "cmdline" 00:02:20.362 Message: lib/metrics: Defining dependency "metrics" 00:02:20.362 Message: lib/hash: Defining dependency "hash" 00:02:20.362 Message: lib/timer: Defining dependency "timer" 00:02:20.362 Fetching value of define "__AVX2__" : (undefined) (cached) 00:02:20.362 Compiler for C supports arguments -mavx2: YES (cached) 00:02:20.362 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:20.362 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:20.362 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:20.362 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:20.362 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:20.362 Message: lib/acl: Defining dependency "acl" 00:02:20.362 Message: lib/bbdev: Defining dependency "bbdev" 00:02:20.362 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:20.362 Run-time dependency libelf found: YES 0.191 00:02:20.362 Message: lib/bpf: Defining dependency "bpf" 00:02:20.362 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:20.362 Message: lib/compressdev: Defining dependency "compressdev" 00:02:20.362 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:20.362 Message: lib/distributor: Defining dependency "distributor" 00:02:20.362 Message: lib/efd: Defining dependency "efd" 00:02:20.362 Message: lib/eventdev: Defining dependency "eventdev" 00:02:20.362 
Message: lib/gpudev: Defining dependency "gpudev" 00:02:20.362 Message: lib/gro: Defining dependency "gro" 00:02:20.362 Message: lib/gso: Defining dependency "gso" 00:02:20.362 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:20.362 Message: lib/jobstats: Defining dependency "jobstats" 00:02:20.362 Message: lib/latencystats: Defining dependency "latencystats" 00:02:20.362 Message: lib/lpm: Defining dependency "lpm" 00:02:20.362 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:20.362 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:20.362 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:20.362 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:20.362 Message: lib/member: Defining dependency "member" 00:02:20.362 Message: lib/pcapng: Defining dependency "pcapng" 00:02:20.362 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:20.362 Message: lib/power: Defining dependency "power" 00:02:20.362 Message: lib/rawdev: Defining dependency "rawdev" 00:02:20.362 Message: lib/regexdev: Defining dependency "regexdev" 00:02:20.362 Message: lib/dmadev: Defining dependency "dmadev" 00:02:20.362 Message: lib/rib: Defining dependency "rib" 00:02:20.362 Message: lib/reorder: Defining dependency "reorder" 00:02:20.362 Message: lib/sched: Defining dependency "sched" 00:02:20.362 Message: lib/security: Defining dependency "security" 00:02:20.362 Message: lib/stack: Defining dependency "stack" 00:02:20.362 Has header "linux/userfaultfd.h" : YES 00:02:20.362 Message: lib/vhost: Defining dependency "vhost" 00:02:20.362 Message: lib/ipsec: Defining dependency "ipsec" 00:02:20.362 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:20.362 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:20.362 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:20.362 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:20.362 Message: lib/fib: 
Defining dependency "fib" 00:02:20.362 Message: lib/port: Defining dependency "port" 00:02:20.362 Message: lib/pdump: Defining dependency "pdump" 00:02:20.362 Message: lib/table: Defining dependency "table" 00:02:20.362 Message: lib/pipeline: Defining dependency "pipeline" 00:02:20.362 Message: lib/graph: Defining dependency "graph" 00:02:20.362 Message: lib/node: Defining dependency "node" 00:02:20.362 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:20.362 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:20.362 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:20.362 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:20.362 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:20.362 Compiler for C supports arguments -Wno-unused-value: YES 00:02:21.304 Compiler for C supports arguments -Wno-format: YES 00:02:21.304 Compiler for C supports arguments -Wno-format-security: YES 00:02:21.304 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:21.304 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:21.304 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:21.304 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:21.304 Fetching value of define "__AVX2__" : (undefined) (cached) 00:02:21.304 Compiler for C supports arguments -mavx2: YES (cached) 00:02:21.304 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:21.304 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:21.304 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:21.304 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:21.304 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:21.304 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:21.304 Configuring doxy-api.conf using configuration 00:02:21.304 Program sphinx-build found: NO 00:02:21.304 Configuring rte_build_config.h using 
configuration 00:02:21.304 Message: 00:02:21.304 ================= 00:02:21.304 Applications Enabled 00:02:21.304 ================= 00:02:21.304 00:02:21.304 apps: 00:02:21.304 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:21.304 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:21.304 test-security-perf, 00:02:21.304 00:02:21.304 Message: 00:02:21.304 ================= 00:02:21.304 Libraries Enabled 00:02:21.304 ================= 00:02:21.304 00:02:21.304 libs: 00:02:21.304 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:21.304 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:21.304 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:21.304 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:21.304 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:21.304 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:21.304 table, pipeline, graph, node, 00:02:21.304 00:02:21.304 Message: 00:02:21.304 =============== 00:02:21.304 Drivers Enabled 00:02:21.304 =============== 00:02:21.304 00:02:21.304 common: 00:02:21.304 00:02:21.304 bus: 00:02:21.304 pci, vdev, 00:02:21.304 mempool: 00:02:21.304 ring, 00:02:21.304 dma: 00:02:21.304 00:02:21.304 net: 00:02:21.304 i40e, 00:02:21.304 raw: 00:02:21.304 00:02:21.304 crypto: 00:02:21.304 00:02:21.304 compress: 00:02:21.304 00:02:21.304 regex: 00:02:21.304 00:02:21.304 vdpa: 00:02:21.304 00:02:21.304 event: 00:02:21.304 00:02:21.304 baseband: 00:02:21.304 00:02:21.304 gpu: 00:02:21.304 00:02:21.304 00:02:21.304 Message: 00:02:21.304 ================= 00:02:21.304 Content Skipped 00:02:21.304 ================= 00:02:21.304 00:02:21.304 apps: 00:02:21.304 00:02:21.304 libs: 00:02:21.304 kni: explicitly disabled via build config (deprecated lib) 00:02:21.304 flow_classify: explicitly disabled via build config 
(deprecated lib) 00:02:21.304 00:02:21.304 drivers: 00:02:21.304 common/cpt: not in enabled drivers build config 00:02:21.304 common/dpaax: not in enabled drivers build config 00:02:21.305 common/iavf: not in enabled drivers build config 00:02:21.305 common/idpf: not in enabled drivers build config 00:02:21.305 common/mvep: not in enabled drivers build config 00:02:21.305 common/octeontx: not in enabled drivers build config 00:02:21.305 bus/auxiliary: not in enabled drivers build config 00:02:21.305 bus/dpaa: not in enabled drivers build config 00:02:21.305 bus/fslmc: not in enabled drivers build config 00:02:21.305 bus/ifpga: not in enabled drivers build config 00:02:21.305 bus/vmbus: not in enabled drivers build config 00:02:21.305 common/cnxk: not in enabled drivers build config 00:02:21.305 common/mlx5: not in enabled drivers build config 00:02:21.305 common/qat: not in enabled drivers build config 00:02:21.305 common/sfc_efx: not in enabled drivers build config 00:02:21.305 mempool/bucket: not in enabled drivers build config 00:02:21.305 mempool/cnxk: not in enabled drivers build config 00:02:21.305 mempool/dpaa: not in enabled drivers build config 00:02:21.305 mempool/dpaa2: not in enabled drivers build config 00:02:21.305 mempool/octeontx: not in enabled drivers build config 00:02:21.305 mempool/stack: not in enabled drivers build config 00:02:21.305 dma/cnxk: not in enabled drivers build config 00:02:21.305 dma/dpaa: not in enabled drivers build config 00:02:21.305 dma/dpaa2: not in enabled drivers build config 00:02:21.305 dma/hisilicon: not in enabled drivers build config 00:02:21.305 dma/idxd: not in enabled drivers build config 00:02:21.305 dma/ioat: not in enabled drivers build config 00:02:21.305 dma/skeleton: not in enabled drivers build config 00:02:21.305 net/af_packet: not in enabled drivers build config 00:02:21.305 net/af_xdp: not in enabled drivers build config 00:02:21.305 net/ark: not in enabled drivers build config 00:02:21.305 net/atlantic: 
not in enabled drivers build config 00:02:21.305 net/avp: not in enabled drivers build config 00:02:21.305 net/axgbe: not in enabled drivers build config 00:02:21.305 net/bnx2x: not in enabled drivers build config 00:02:21.305 net/bnxt: not in enabled drivers build config 00:02:21.305 net/bonding: not in enabled drivers build config 00:02:21.305 net/cnxk: not in enabled drivers build config 00:02:21.305 net/cxgbe: not in enabled drivers build config 00:02:21.305 net/dpaa: not in enabled drivers build config 00:02:21.305 net/dpaa2: not in enabled drivers build config 00:02:21.305 net/e1000: not in enabled drivers build config 00:02:21.305 net/ena: not in enabled drivers build config 00:02:21.305 net/enetc: not in enabled drivers build config 00:02:21.305 net/enetfec: not in enabled drivers build config 00:02:21.305 net/enic: not in enabled drivers build config 00:02:21.305 net/failsafe: not in enabled drivers build config 00:02:21.305 net/fm10k: not in enabled drivers build config 00:02:21.305 net/gve: not in enabled drivers build config 00:02:21.305 net/hinic: not in enabled drivers build config 00:02:21.305 net/hns3: not in enabled drivers build config 00:02:21.305 net/iavf: not in enabled drivers build config 00:02:21.305 net/ice: not in enabled drivers build config 00:02:21.305 net/idpf: not in enabled drivers build config 00:02:21.305 net/igc: not in enabled drivers build config 00:02:21.305 net/ionic: not in enabled drivers build config 00:02:21.305 net/ipn3ke: not in enabled drivers build config 00:02:21.305 net/ixgbe: not in enabled drivers build config 00:02:21.305 net/kni: not in enabled drivers build config 00:02:21.305 net/liquidio: not in enabled drivers build config 00:02:21.305 net/mana: not in enabled drivers build config 00:02:21.305 net/memif: not in enabled drivers build config 00:02:21.305 net/mlx4: not in enabled drivers build config 00:02:21.305 net/mlx5: not in enabled drivers build config 00:02:21.305 net/mvneta: not in enabled drivers build 
config 00:02:21.305 net/mvpp2: not in enabled drivers build config 00:02:21.305 net/netvsc: not in enabled drivers build config 00:02:21.305 net/nfb: not in enabled drivers build config 00:02:21.305 net/nfp: not in enabled drivers build config 00:02:21.305 net/ngbe: not in enabled drivers build config 00:02:21.305 net/null: not in enabled drivers build config 00:02:21.305 net/octeontx: not in enabled drivers build config 00:02:21.305 net/octeon_ep: not in enabled drivers build config 00:02:21.305 net/pcap: not in enabled drivers build config 00:02:21.305 net/pfe: not in enabled drivers build config 00:02:21.305 net/qede: not in enabled drivers build config 00:02:21.305 net/ring: not in enabled drivers build config 00:02:21.305 net/sfc: not in enabled drivers build config 00:02:21.305 net/softnic: not in enabled drivers build config 00:02:21.305 net/tap: not in enabled drivers build config 00:02:21.305 net/thunderx: not in enabled drivers build config 00:02:21.305 net/txgbe: not in enabled drivers build config 00:02:21.305 net/vdev_netvsc: not in enabled drivers build config 00:02:21.305 net/vhost: not in enabled drivers build config 00:02:21.305 net/virtio: not in enabled drivers build config 00:02:21.305 net/vmxnet3: not in enabled drivers build config 00:02:21.305 raw/cnxk_bphy: not in enabled drivers build config 00:02:21.305 raw/cnxk_gpio: not in enabled drivers build config 00:02:21.305 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:21.305 raw/ifpga: not in enabled drivers build config 00:02:21.305 raw/ntb: not in enabled drivers build config 00:02:21.305 raw/skeleton: not in enabled drivers build config 00:02:21.305 crypto/armv8: not in enabled drivers build config 00:02:21.305 crypto/bcmfs: not in enabled drivers build config 00:02:21.305 crypto/caam_jr: not in enabled drivers build config 00:02:21.305 crypto/ccp: not in enabled drivers build config 00:02:21.305 crypto/cnxk: not in enabled drivers build config 00:02:21.305 crypto/dpaa_sec: not in 
enabled drivers build config 00:02:21.305 crypto/dpaa2_sec: not in enabled drivers build config 00:02:21.305 crypto/ipsec_mb: not in enabled drivers build config 00:02:21.305 crypto/mlx5: not in enabled drivers build config 00:02:21.305 crypto/mvsam: not in enabled drivers build config 00:02:21.305 crypto/nitrox: not in enabled drivers build config 00:02:21.305 crypto/null: not in enabled drivers build config 00:02:21.305 crypto/octeontx: not in enabled drivers build config 00:02:21.305 crypto/openssl: not in enabled drivers build config 00:02:21.305 crypto/scheduler: not in enabled drivers build config 00:02:21.305 crypto/uadk: not in enabled drivers build config 00:02:21.305 crypto/virtio: not in enabled drivers build config 00:02:21.305 compress/isal: not in enabled drivers build config 00:02:21.305 compress/mlx5: not in enabled drivers build config 00:02:21.305 compress/octeontx: not in enabled drivers build config 00:02:21.305 compress/zlib: not in enabled drivers build config 00:02:21.305 regex/mlx5: not in enabled drivers build config 00:02:21.305 regex/cn9k: not in enabled drivers build config 00:02:21.305 vdpa/ifc: not in enabled drivers build config 00:02:21.305 vdpa/mlx5: not in enabled drivers build config 00:02:21.305 vdpa/sfc: not in enabled drivers build config 00:02:21.305 event/cnxk: not in enabled drivers build config 00:02:21.305 event/dlb2: not in enabled drivers build config 00:02:21.305 event/dpaa: not in enabled drivers build config 00:02:21.305 event/dpaa2: not in enabled drivers build config 00:02:21.305 event/dsw: not in enabled drivers build config 00:02:21.305 event/opdl: not in enabled drivers build config 00:02:21.305 event/skeleton: not in enabled drivers build config 00:02:21.305 event/sw: not in enabled drivers build config 00:02:21.305 event/octeontx: not in enabled drivers build config 00:02:21.305 baseband/acc: not in enabled drivers build config 00:02:21.305 baseband/fpga_5gnr_fec: not in enabled drivers build config 
00:02:21.305 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:21.305 baseband/la12xx: not in enabled drivers build config 00:02:21.305 baseband/null: not in enabled drivers build config 00:02:21.305 baseband/turbo_sw: not in enabled drivers build config 00:02:21.305 gpu/cuda: not in enabled drivers build config 00:02:21.305 00:02:21.305 00:02:21.305 Build targets in project: 316 00:02:21.305 00:02:21.305 DPDK 22.11.4 00:02:21.305 00:02:21.305 User defined options 00:02:21.305 libdir : lib 00:02:21.305 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:21.305 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:21.305 c_link_args : 00:02:21.305 enable_docs : false 00:02:21.305 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:21.305 enable_kmods : false 00:02:21.305 machine : native 00:02:21.305 tests : false 00:02:21.305 00:02:21.305 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:21.305 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
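The trailing comma in the `enable_drivers` value shown in the options summary above (`...,net/i40e/base,`) comes from joining the DPDK_DRIVERS array with `printf %s,`, which emits the format string once per argument and so appends a separator after every element, including the last. A small sketch, assuming the driver list from the trace:

```shell
# printf repeats its format for each argument, so '%s,' comma-joins the
# array but leaves a trailing ',' (which meson accepts in list options).
DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
drivers=$(printf %s, "${DPDK_DRIVERS[@]}")
echo "$drivers"   # prints bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
```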
00:02:21.305 20:03:33 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:02:21.305 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:21.573 [1/745] Generating lib/rte_kvargs_mingw with a custom command 00:02:21.573 [2/745] Generating lib/rte_telemetry_mingw with a custom command 00:02:21.573 [3/745] Generating lib/rte_kvargs_def with a custom command 00:02:21.573 [4/745] Generating lib/rte_telemetry_def with a custom command 00:02:21.573 [5/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:21.573 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:21.573 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:21.573 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:21.573 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:21.573 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:21.573 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:21.573 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:21.573 [13/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:21.573 [14/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:21.573 [15/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:21.573 [16/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:21.573 [17/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:21.573 [18/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:21.573 [19/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:21.573 [20/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 
00:02:21.573 [21/745] Linking static target lib/librte_kvargs.a
00:02:21.573 [22/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:21.573 [23/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:21.573 [24/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:21.573 [25/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:21.573 [26/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:21.573 [27/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:21.573 [28/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:21.573 [29/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:21.573 [30/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:21.837 [31/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o
00:02:21.837 [32/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:21.837 [33/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:21.837 [34/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:21.837 [35/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:21.837 [36/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:21.837 [37/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:21.837 [38/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:21.837 [39/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:21.837 [40/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:21.837 [41/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:21.837 [42/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:21.837 [43/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:21.837 [44/745] Generating lib/rte_eal_mingw with a custom command
00:02:21.837 [45/745] Generating lib/rte_eal_def with a custom command
00:02:21.837 [46/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:21.837 [47/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:21.837 [48/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:21.837 [49/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:21.837 [50/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:21.837 [51/745] Generating lib/rte_ring_def with a custom command
00:02:21.837 [52/745] Generating lib/rte_rcu_def with a custom command
00:02:21.837 [53/745] Generating lib/rte_mempool_mingw with a custom command
00:02:21.837 [54/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:21.837 [55/745] Generating lib/rte_ring_mingw with a custom command
00:02:21.837 [56/745] Generating lib/rte_rcu_mingw with a custom command
00:02:21.837 [57/745] Generating lib/rte_mempool_def with a custom command
00:02:21.837 [58/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:21.837 [59/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:21.837 [60/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o
00:02:21.837 [61/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:21.837 [62/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:21.837 [63/745] Generating lib/rte_mbuf_def with a custom command
00:02:21.837 [64/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:21.837 [65/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:21.837 [66/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:21.837 [67/745] Generating lib/rte_mbuf_mingw with a custom command
00:02:21.837 [68/745] Generating lib/rte_net_def with a custom command
00:02:21.837 [69/745] Generating lib/rte_net_mingw with a custom command
00:02:21.837 [70/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:21.837 [71/745] Generating lib/rte_meter_def with a custom command
00:02:21.837 [72/745] Generating lib/rte_meter_mingw with a custom command
00:02:21.837 [73/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:21.837 [74/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:21.837 [75/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:21.837 [76/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:22.107 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:22.107 [78/745] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:22.107 [79/745] Linking static target lib/librte_ring.a
00:02:22.107 [80/745] Generating lib/rte_ethdev_def with a custom command
00:02:22.107 [81/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:22.107 [82/745] Generating lib/rte_ethdev_mingw with a custom command
00:02:22.107 [83/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:22.107 [84/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:22.107 [85/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:22.107 [86/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:22.107 [87/745] Linking static target lib/librte_meter.a
00:02:22.107 [88/745] Generating lib/rte_pci_def with a custom command
00:02:22.107 [89/745] Linking target lib/librte_kvargs.so.23.0
00:02:22.107 [90/745] Generating lib/rte_pci_mingw with a custom command
00:02:22.374 [91/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:22.374 [92/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:22.374 [93/745] Linking static target lib/librte_pci.a
00:02:22.374 [94/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:22.374 [95/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:22.374 [96/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:22.374 [97/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:22.374 [98/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:22.638 [99/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:22.638 [100/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:22.639 [101/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:22.639 [102/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:22.639 [103/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:22.639 [104/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:22.639 [105/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:22.639 [106/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:22.639 [107/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:22.639 [108/745] Linking static target lib/librte_telemetry.a
00:02:22.639 [109/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:22.639 [110/745] Generating lib/rte_cmdline_def with a custom command
00:02:22.639 [111/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:22.639 [112/745] Generating lib/rte_cmdline_mingw with a custom command
00:02:22.639 [113/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:22.639 [114/745] Generating lib/rte_metrics_def with a custom command
00:02:22.639 [115/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:22.639 [116/745] Generating lib/rte_metrics_mingw with a custom command
00:02:22.639 [117/745] Generating lib/rte_hash_mingw with a custom command
00:02:22.639 [118/745] Generating lib/rte_hash_def with a custom command
00:02:22.639 [119/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:22.639 [120/745] Generating lib/rte_timer_def with a custom command
00:02:22.910 [121/745] Generating lib/rte_timer_mingw with a custom command
00:02:22.910 [122/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:02:22.910 [123/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:22.910 [124/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:22.910 [125/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:22.910 [126/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:22.910 [127/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:23.182 [128/745] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:23.182 [129/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:23.182 [130/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:23.182 [131/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:23.182 [132/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:23.182 [133/745] Generating lib/rte_acl_mingw with a custom command
00:02:23.182 [134/745] Generating lib/rte_acl_def with a custom command
00:02:23.182 [135/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:23.182 [136/745] Generating lib/rte_bbdev_def with a custom command
00:02:23.182 [137/745] Generating lib/rte_bbdev_mingw with a custom command
00:02:23.182 [138/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:23.182 [139/745] Generating lib/rte_bitratestats_mingw with a custom command
00:02:23.182 [140/745] Generating lib/rte_bitratestats_def with a custom command
00:02:23.182 [141/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:23.182 [142/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:23.182 [143/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:23.182 [144/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:23.182 [145/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:23.182 [146/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:23.182 [147/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:23.447 [148/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:23.447 [149/745] Generating lib/rte_bpf_def with a custom command
00:02:23.447 [150/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:23.447 [151/745] Generating lib/rte_bpf_mingw with a custom command
00:02:23.447 [152/745] Linking target lib/librte_telemetry.so.23.0
00:02:23.447 [153/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:23.447 [154/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:23.447 [155/745] Generating lib/rte_cfgfile_def with a custom command
00:02:23.447 [156/745] Generating lib/rte_cfgfile_mingw with a custom command
00:02:23.447 [157/745] Generating lib/rte_compressdev_def with a custom command
00:02:23.447 [158/745] Generating lib/rte_compressdev_mingw with a custom command
00:02:23.447 [159/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:23.447 [160/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:23.447 [161/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:23.447 [162/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:23.447 [163/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:23.447 [164/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:23.447 [165/745] Linking static target lib/librte_rcu.a
00:02:23.447 [166/745] Generating lib/rte_cryptodev_def with a custom command
00:02:23.447 [167/745] Generating lib/rte_cryptodev_mingw with a custom command
00:02:23.447 [168/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:23.712 [169/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:23.712 [170/745] Linking static target lib/librte_timer.a
00:02:23.712 [171/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:23.712 [172/745] Generating lib/rte_distributor_def with a custom command
00:02:23.713 [173/745] Linking static target lib/librte_cmdline.a
00:02:23.713 [174/745] Generating lib/rte_distributor_mingw with a custom command
00:02:23.713 [175/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:23.713 [176/745] Generating lib/rte_efd_def with a custom command
00:02:23.713 [177/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:23.713 [178/745] Linking static target lib/librte_net.a
00:02:23.713 [179/745] Generating lib/rte_efd_mingw with a custom command
00:02:23.979 [180/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:23.979 [181/745] Linking static target lib/librte_mempool.a
00:02:23.979 [182/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:02:23.979 [183/745] Linking static target lib/librte_metrics.a
00:02:23.979 [184/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:02:23.979 [185/745] Linking static target lib/librte_cfgfile.a
00:02:23.979 [186/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:23.979 [187/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.245 [188/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.245 [189/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:02:24.245 [190/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:24.245 [191/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:02:24.245 [192/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:24.245 [193/745] Linking static target lib/librte_eal.a
00:02:24.245 [194/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:02:24.514 [195/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:02:24.514 [196/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:02:24.514 [197/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:02:24.514 [198/745] Linking static target lib/librte_bitratestats.a
00:02:24.514 [199/745] Generating lib/rte_eventdev_def with a custom command
00:02:24.514 [200/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:02:24.514 [201/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:02:24.514 [202/745] Generating lib/rte_eventdev_mingw with a custom command
00:02:24.514 [203/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.514 [204/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.514 [205/745] Generating lib/rte_gpudev_def with a custom command
00:02:24.514 [206/745] Generating lib/rte_gpudev_mingw with a custom command
00:02:24.514 [207/745] Generating lib/rte_gro_def with a custom command
00:02:24.514 [208/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:24.514 [209/745] Generating lib/rte_gro_mingw with a custom command
00:02:24.782 [210/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:02:24.782 [211/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:02:24.782 [212/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:02:24.782 [213/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.782 [214/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:24.782 [215/745] Generating lib/rte_gso_def with a custom command
00:02:24.782 [216/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:02:24.782 [217/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:02:24.782 [218/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols
00:02:24.782 [219/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols
00:02:24.782 [220/745] Generating lib/rte_gso_mingw with a custom command
00:02:25.044 [221/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:02:25.044 [222/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:25.044 [223/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:02:25.044 [224/745] Linking static target lib/librte_bbdev.a
00:02:25.044 [225/745] Generating lib/rte_ip_frag_def with a custom command
00:02:25.044 [226/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:02:25.044 [227/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.044 [228/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.044 [229/745] Generating lib/rte_ip_frag_mingw with a custom command
00:02:25.044 [230/745] Generating lib/rte_jobstats_def with a custom command
00:02:25.044 [231/745] Generating lib/rte_jobstats_mingw with a custom command
00:02:25.044 [232/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:25.308 [233/745] Generating lib/rte_latencystats_mingw with a custom command
00:02:25.308 [234/745] Generating lib/rte_latencystats_def with a custom command
00:02:25.308 [235/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:25.308 [236/745] Linking static target lib/librte_compressdev.a
00:02:25.308 [237/745] Generating lib/rte_lpm_def with a custom command
00:02:25.308 [238/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:02:25.308 [239/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:02:25.308 [240/745] Linking static target lib/librte_jobstats.a
00:02:25.308 [241/745] Generating lib/rte_lpm_mingw with a custom command
00:02:25.308 [242/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:02:25.308 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:02:25.583 [244/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:25.583 [245/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:02:25.583 [246/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:02:25.583 [247/745] Linking static target lib/librte_distributor.a
00:02:25.583 [248/745] Generating lib/rte_member_def with a custom command
00:02:25.845 [249/745] Generating lib/rte_member_mingw with a custom command
00:02:25.845 [250/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.845 [251/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:02:25.845 [252/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.845 [253/745] Generating lib/rte_pcapng_def with a custom command
00:02:25.846 [254/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:02:25.846 [255/745] Generating lib/rte_pcapng_mingw with a custom command
00:02:25.846 [256/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:02:26.109 [257/745] Linking static target lib/librte_bpf.a
00:02:26.109 [258/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:02:26.109 [259/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.109 [260/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:02:26.109 [261/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:02:26.109 [262/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:02:26.109 [263/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:26.109 [264/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:02:26.109 [265/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:26.109 [266/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:02:26.109 [267/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:02:26.109 [268/745] Linking static target lib/librte_gpudev.a
00:02:26.109 [269/745] Generating lib/rte_power_mingw with a custom command
00:02:26.109 [270/745] Generating lib/rte_power_def with a custom command
00:02:26.109 [271/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:02:26.109 [272/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:02:26.109 [273/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:26.109 [274/745] Generating lib/rte_rawdev_mingw with a custom command
00:02:26.109 [275/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:02:26.378 [276/745] Linking static target lib/librte_gro.a
00:02:26.378 [277/745] Generating lib/rte_rawdev_def with a custom command
00:02:26.378 [278/745] Generating lib/rte_regexdev_def with a custom command
00:02:26.378 [279/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:26.378 [280/745] Generating lib/rte_regexdev_mingw with a custom command
00:02:26.378 [281/745] Generating lib/rte_dmadev_def with a custom command
00:02:26.378 [282/745] Generating lib/rte_dmadev_mingw with a custom command
00:02:26.378 [283/745] Generating lib/rte_rib_def with a custom command
00:02:26.378 [284/745] Generating lib/rte_rib_mingw with a custom command
00:02:26.378 [285/745] Generating lib/rte_reorder_def with a custom command
00:02:26.378 [286/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:02:26.378 [287/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.644 [288/745] Generating lib/rte_reorder_mingw with a custom command
00:02:26.644 [289/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o
00:02:26.644 [290/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:02:26.644 [291/745] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.644 [292/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:02:26.644 [293/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:02:26.644 [294/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:02:26.644 [295/745] Linking static target lib/librte_latencystats.a
00:02:26.644 [296/745] Generating lib/rte_sched_mingw with a custom command
00:02:26.644 [297/745] Generating lib/rte_sched_def with a custom command
00:02:26.644 [298/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:02:26.644 [299/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:02:26.644 [300/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:02:26.644 [301/745] Linking static target lib/member/libsketch_avx512_tmp.a
00:02:26.644 [302/745] Generating lib/rte_security_def with a custom command
00:02:26.644 [303/745] Generating lib/rte_security_mingw with a custom command
00:02:26.644 [304/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:02:26.644 [305/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.644 [306/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:02:26.644 [307/745] Generating lib/rte_stack_def with a custom command
00:02:26.909 [308/745] Generating lib/rte_stack_mingw with a custom command
00:02:26.909 [309/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:02:26.909 [310/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:02:26.909 [311/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:02:26.909 [312/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:02:26.909 [313/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:02:26.909 [314/745] Linking static target lib/librte_rawdev.a
00:02:26.909 [315/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:02:26.909 [316/745] Linking static target lib/librte_stack.a
00:02:26.909 [317/745] Generating lib/rte_vhost_mingw with a custom command
00:02:26.909 [318/745] Generating lib/rte_vhost_def with a custom command
00:02:26.909 [319/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:02:26.909 [320/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:26.909 [321/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:26.909 [322/745] Linking static target lib/librte_dmadev.a
00:02:27.177 [323/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.177 [324/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:02:27.177 [325/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:02:27.177 [326/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:02:27.177 [327/745] Linking static target lib/librte_ip_frag.a
00:02:27.177 [328/745] Generating lib/rte_ipsec_def with a custom command
00:02:27.177 [329/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.177 [330/745] Generating lib/rte_ipsec_mingw with a custom command
00:02:27.442 [331/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:02:27.442 [332/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:02:27.442 [333/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o
00:02:27.442 [334/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:27.713 [335/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:02:27.713 [336/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.713 [337/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.713 [338/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.713 [339/745] Generating lib/rte_fib_def with a custom command
00:02:27.713 [340/745] Generating lib/rte_fib_mingw with a custom command
00:02:27.713 [341/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:27.713 [342/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:02:27.713 [343/745] Linking static target lib/librte_regexdev.a
00:02:27.978 [344/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:02:27.978 [345/745] Linking static target lib/librte_gso.a
00:02:27.978 [346/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:02:27.978 [347/745] Linking static target lib/librte_efd.a
00:02:27.978 [348/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.978 [349/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:02:27.978 [350/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:02:27.978 [351/745] Linking static target lib/librte_pcapng.a
00:02:28.242 [352/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:02:28.242 [353/745] Linking static target lib/librte_lpm.a
00:02:28.242 [354/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:28.242 [355/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:02:28.242 [356/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.242 [357/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:02:28.242 [358/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:28.242 [359/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:28.242 [360/745] Linking static target lib/librte_reorder.a
00:02:28.513 [361/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.513 [362/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:28.513 [363/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o
00:02:28.513 [364/745] Linking static target lib/acl/libavx2_tmp.a
00:02:28.513 [365/745] Generating lib/rte_port_def with a custom command
00:02:28.513 [366/745] Generating lib/rte_port_mingw with a custom command
00:02:28.513 [367/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:02:28.513 [368/745] Generating lib/rte_pdump_def with a custom command
00:02:28.513 [369/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:02:28.513 [370/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:28.513 [371/745] Generating lib/rte_pdump_mingw with a custom command
00:02:28.513 [372/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o
00:02:28.513 [373/745] Linking static target lib/fib/libtrie_avx512_tmp.a
00:02:28.777 [374/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o
00:02:28.777 [375/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:28.777 [376/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a
00:02:28.777 [377/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:28.777 [378/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:02:28.777 [379/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.777 [380/745] Linking static target lib/librte_security.a
00:02:28.777 [381/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.777 [382/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.777 [383/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:02:28.777 [384/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:28.777 [385/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:02:28.777 [386/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:29.042 [387/745] Linking static target lib/librte_power.a
00:02:29.042 [388/745] Linking static target lib/librte_hash.a
00:02:29.042 [389/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.042 [390/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:02:29.042 [391/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o
00:02:29.042 [392/745] Linking static target lib/librte_rib.a
00:02:29.042 [393/745] Linking static target lib/acl/libavx512_tmp.a
00:02:29.042 [394/745] Linking static target lib/librte_acl.a
00:02:29.042 [395/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:02:29.314 [396/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:02:29.314 [397/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:02:29.314 [398/745] Generating lib/rte_table_def with a custom command
00:02:29.314 [399/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:29.314 [400/745] Generating lib/rte_table_mingw with a custom command
00:02:29.314 [401/745] Linking static target lib/librte_ethdev.a
00:02:29.314 [402/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.576 [403/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.841 [404/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:29.841 [405/745] Linking static target lib/librte_mbuf.a
00:02:29.841 [406/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.841 [407/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:02:29.841 [408/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:02:29.841 [409/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:29.841 [410/745] Generating lib/rte_pipeline_def with a custom command
00:02:29.841 [411/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:02:29.841 [412/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:02:29.841 [413/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:02:30.112 [414/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:02:30.112 [415/745] Generating lib/rte_pipeline_mingw with a custom command
00:02:30.112 [416/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:02:30.112 [417/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:02:30.112 [418/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:02:30.112 [419/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:02:30.112 [420/745] Generating lib/rte_graph_def with a custom command
00:02:30.112 [421/745] Generating lib/rte_graph_mingw with a custom command
00:02:30.112 [422/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:02:30.112 [423/745] Linking static target lib/librte_fib.a
00:02:30.112 [424/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.112 [425/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:02:30.112 [426/745] Linking static target lib/librte_member.a
00:02:30.373 [427/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:02:30.373 [428/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:02:30.373 [429/745] Linking static target lib/librte_eventdev.a
00:02:30.373 [430/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:02:30.373 [431/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:02:30.373 [432/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:02:30.373 [433/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.373 [434/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:02:30.373 [435/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:02:30.373 [436/745] Compiling C object lib/librte_node.a.p/node_null.c.o
00:02:30.654 [437/745] Generating lib/rte_node_def with a custom command
00:02:30.654 [438/745] Generating lib/rte_node_mingw with a custom command
00:02:30.654 [439/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:02:30.654 [440/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.654 [441/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:02:30.654 [442/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.654 [443/745] Linking static target lib/librte_sched.a
00:02:30.654 [444/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:30.654 [445/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.654 [446/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:30.654 [447/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:30.654 [448/745] Generating drivers/rte_bus_pci_def with a custom command 00:02:30.654 [449/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:30.916 [450/745] Generating drivers/rte_bus_vdev_def with a custom command 00:02:30.916 [451/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:30.916 [452/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:30.916 [453/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:30.916 [454/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:30.916 [455/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:30.916 [456/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:30.916 [457/745] Linking static target lib/librte_cryptodev.a 00:02:30.916 [458/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:30.916 [459/745] Generating drivers/rte_mempool_ring_def with a custom command 00:02:31.178 [460/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:31.178 [461/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:31.178 [462/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:31.178 [463/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:31.178 [464/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:31.178 [465/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:31.178 [466/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:31.178 [467/745] Linking static target 
lib/librte_pdump.a 00:02:31.178 [468/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:31.178 [469/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:31.178 [470/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:31.178 [471/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:31.446 [472/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:31.446 [473/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:31.446 [474/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:31.446 [475/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:31.446 [476/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.446 [477/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:31.446 [478/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:31.446 [479/745] Generating drivers/rte_net_i40e_def with a custom command 00:02:31.725 [480/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:31.725 [481/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:31.725 [482/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:31.725 [483/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.725 [484/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:31.725 [485/745] Linking static target lib/librte_table.a 00:02:31.725 [486/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:31.725 [487/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:31.725 [488/745] Linking static target lib/librte_ipsec.a 00:02:31.725 [489/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:31.991 [490/745] Compiling C object 
drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:31.991 [491/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:31.991 [492/745] Linking static target drivers/librte_bus_vdev.a 00:02:31.992 [493/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:31.992 [494/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:32.273 [495/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:32.273 [496/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:32.273 [497/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.273 [498/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:32.273 [499/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:32.273 [500/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:32.273 [501/745] Linking static target lib/librte_graph.a 00:02:32.273 [502/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:32.549 [503/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:32.549 [504/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:32.549 [505/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.549 [506/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:32.549 [507/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:32.549 [508/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:32.549 [509/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:32.549 [510/745] Linking static target drivers/librte_bus_pci.a 00:02:32.549 [511/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:32.549 [512/745] Compiling C 
object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:32.815 [513/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:32.815 [514/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.086 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:33.354 [516/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.354 [517/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:33.354 [518/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:33.354 [519/745] Linking static target lib/librte_port.a 00:02:33.354 [520/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:33.354 [521/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:33.618 [522/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.618 [523/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:33.618 [524/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:33.618 [525/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:33.618 [526/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:33.887 [527/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:33.887 [528/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.887 [529/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:33.887 [530/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:33.887 [531/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:33.887 [532/745] Compiling C object 
drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:33.887 [533/745] Linking static target drivers/librte_mempool_ring.a 00:02:33.888 [534/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:33.888 [535/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:34.153 [536/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:34.153 [537/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:34.153 [538/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:34.153 [539/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.420 [540/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:34.684 [541/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.684 [542/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:34.684 [543/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:34.684 [544/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:34.947 [545/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:34.947 [546/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:34.947 [547/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:34.947 [548/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:34.947 [549/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:34.947 [550/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:35.245 [551/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:35.245 [552/745] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:35.510 [553/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:35.510 [554/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:35.776 [555/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:35.776 [556/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:35.776 [557/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:35.776 [558/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:35.776 [559/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:36.036 [560/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:36.036 [561/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:36.302 [562/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:36.302 [563/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:36.302 [564/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:36.302 [565/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:36.302 [566/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:36.565 [567/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:36.565 [568/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:36.565 [569/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:36.565 [570/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:36.565 [571/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:36.565 [572/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:36.831 [573/745] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:36.831 [574/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:37.099 [575/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:37.099 [576/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:37.099 [577/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:37.099 [578/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:37.099 [579/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:37.099 [580/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:37.099 [581/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:37.099 [582/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:37.415 [583/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:37.415 [584/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:37.415 [585/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:37.735 [586/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.735 [587/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:37.735 [588/745] Linking target lib/librte_eal.so.23.0 00:02:37.735 [589/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.002 [590/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:38.002 [591/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:38.002 [592/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:38.002 [593/745] Linking target lib/librte_pci.so.23.0 
00:02:38.002 [594/745] Linking target lib/librte_meter.so.23.0 00:02:38.002 [595/745] Linking target lib/librte_timer.so.23.0 00:02:38.002 [596/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:38.002 [597/745] Linking target lib/librte_ring.so.23.0 00:02:38.275 [598/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:38.275 [599/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:38.275 [600/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:38.275 [601/745] Linking target lib/librte_acl.so.23.0 00:02:38.275 [602/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:38.275 [603/745] Linking target lib/librte_cfgfile.so.23.0 00:02:38.275 [604/745] Linking target lib/librte_jobstats.so.23.0 00:02:38.275 [605/745] Linking target lib/librte_dmadev.so.23.0 00:02:38.275 [606/745] Linking target lib/librte_stack.so.23.0 00:02:38.275 [607/745] Linking target lib/librte_rawdev.so.23.0 00:02:38.275 [608/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:38.275 [609/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:38.275 [610/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:38.275 [611/745] Linking target lib/librte_graph.so.23.0 00:02:38.275 [612/745] Linking target drivers/librte_bus_vdev.so.23.0 00:02:38.275 [613/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:38.275 [614/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:38.541 [615/745] Linking target drivers/librte_bus_pci.so.23.0 00:02:38.541 [616/745] Linking target lib/librte_rcu.so.23.0 00:02:38.541 [617/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:38.541 [618/745] Linking target lib/librte_mempool.so.23.0 00:02:38.541 [619/745] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:38.541 [620/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:38.541 [621/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:38.541 [622/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:38.541 [623/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:38.541 [624/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:38.541 [625/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:38.541 [626/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:38.541 [627/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:38.541 [628/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:38.541 [629/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:38.541 [630/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:38.541 [631/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:38.800 [632/745] Linking target drivers/librte_mempool_ring.so.23.0 00:02:38.800 [633/745] Linking target lib/librte_rib.so.23.0 00:02:38.800 [634/745] Linking target lib/librte_mbuf.so.23.0 00:02:38.800 [635/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:38.801 [636/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:38.801 [637/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:38.801 [638/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:38.801 [639/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:38.801 [640/745] Linking target lib/librte_bbdev.so.23.0 00:02:38.801 [641/745] Linking target lib/librte_regexdev.so.23.0 00:02:38.801 [642/745] 
Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:38.801 [643/745] Linking target lib/librte_fib.so.23.0 00:02:38.801 [644/745] Linking target lib/librte_net.so.23.0 00:02:38.801 [645/745] Linking target lib/librte_reorder.so.23.0 00:02:38.801 [646/745] Linking target lib/librte_compressdev.so.23.0 00:02:38.801 [647/745] Linking target lib/librte_gpudev.so.23.0 00:02:38.801 [648/745] Linking target lib/librte_distributor.so.23.0 00:02:38.801 [649/745] Linking target lib/librte_sched.so.23.0 00:02:38.801 [650/745] Linking target lib/librte_cryptodev.so.23.0 00:02:38.801 [651/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:39.061 [652/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:39.061 [653/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:39.061 [654/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:39.061 [655/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:39.061 [656/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:39.061 [657/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:39.061 [658/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:39.061 [659/745] Linking target lib/librte_cmdline.so.23.0 00:02:39.061 [660/745] Linking target lib/librte_hash.so.23.0 00:02:39.061 [661/745] Linking target lib/librte_security.so.23.0 00:02:39.061 [662/745] Linking target lib/librte_ethdev.so.23.0 00:02:39.061 [663/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:39.320 [664/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:39.320 [665/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:39.320 [666/745] Generating symbol file 
lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:39.320 [667/745] Linking target lib/librte_lpm.so.23.0 00:02:39.320 [668/745] Linking target lib/librte_member.so.23.0 00:02:39.320 [669/745] Linking target lib/librte_efd.so.23.0 00:02:39.320 [670/745] Linking target lib/librte_ipsec.so.23.0 00:02:39.320 [671/745] Linking target lib/librte_pcapng.so.23.0 00:02:39.320 [672/745] Linking target lib/librte_metrics.so.23.0 00:02:39.320 [673/745] Linking target lib/librte_gso.so.23.0 00:02:39.320 [674/745] Linking target lib/librte_ip_frag.so.23.0 00:02:39.320 [675/745] Linking target lib/librte_gro.so.23.0 00:02:39.320 [676/745] Linking target lib/librte_power.so.23.0 00:02:39.320 [677/745] Linking target lib/librte_bpf.so.23.0 00:02:39.320 [678/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:39.320 [679/745] Linking target lib/librte_eventdev.so.23.0 00:02:39.320 [680/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:39.579 [681/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:39.579 [682/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:39.579 [683/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:39.579 [684/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:39.579 [685/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:39.579 [686/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:39.579 [687/745] Linking target lib/librte_bitratestats.so.23.0 00:02:39.579 [688/745] Linking target lib/librte_latencystats.so.23.0 00:02:39.579 [689/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:39.579 [690/745] Linking target lib/librte_port.so.23.0 00:02:39.579 [691/745] Linking target lib/librte_pdump.so.23.0 00:02:39.838 [692/745] 
Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:39.838 [693/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:39.838 [694/745] Linking target lib/librte_table.so.23.0 00:02:39.838 [695/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:39.838 [696/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:39.838 [697/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:40.097 [698/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:40.356 [699/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:40.356 [700/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:40.356 [701/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:40.615 [702/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:40.615 [703/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:40.874 [704/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:40.874 [705/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:40.874 [706/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:40.874 [707/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:40.874 [708/745] Linking static target drivers/librte_net_i40e.a 00:02:41.133 [709/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:41.392 [710/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:41.651 [711/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.651 [712/745] Linking target drivers/librte_net_i40e.so.23.0 00:02:42.219 [713/745] Compiling C 
object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:42.219 [714/745] Linking static target lib/librte_node.a 00:02:42.478 [715/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.737 [716/745] Linking target lib/librte_node.so.23.0 00:02:42.997 [717/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:43.256 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:44.193 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:52.311 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:24.385 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:24.385 [722/745] Linking static target lib/librte_vhost.a 00:03:24.385 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.385 [724/745] Linking target lib/librte_vhost.so.23.0 00:03:34.377 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:34.377 [726/745] Linking static target lib/librte_pipeline.a 00:03:34.377 [727/745] Linking target app/dpdk-test-fib 00:03:34.377 [728/745] Linking target app/dpdk-test-acl 00:03:34.377 [729/745] Linking target app/dpdk-test-gpudev 00:03:34.377 [730/745] Linking target app/dpdk-test-cmdline 00:03:34.377 [731/745] Linking target app/dpdk-dumpcap 00:03:34.377 [732/745] Linking target app/dpdk-test-pipeline 00:03:34.377 [733/745] Linking target app/dpdk-test-security-perf 00:03:34.377 [734/745] Linking target app/dpdk-test-sad 00:03:34.377 [735/745] Linking target app/dpdk-test-regex 00:03:34.377 [736/745] Linking target app/dpdk-test-flow-perf 00:03:34.377 [737/745] Linking target app/dpdk-pdump 00:03:34.377 [738/745] Linking target app/dpdk-proc-info 00:03:34.377 [739/745] Linking target app/dpdk-test-crypto-perf 00:03:34.377 [740/745] Linking target app/dpdk-test-bbdev 00:03:34.377 [741/745] Linking target app/dpdk-test-eventdev 
00:03:34.377 [742/745] Linking target app/dpdk-test-compress-perf 00:03:34.377 [743/745] Linking target app/dpdk-testpmd 00:03:35.756 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.016 [745/745] Linking target lib/librte_pipeline.so.23.0 00:03:36.016 20:04:47 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:36.016 20:04:47 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:36.016 20:04:47 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:03:36.016 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:36.016 [0/1] Installing files. 00:03:36.277 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:36.277 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.277 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.277 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:36.278 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:36.278 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:36.278 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.278 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:36.279 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:36.279 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:36.279 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:36.280 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:36.280 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:36.280 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:36.280 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:36.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:36.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:36.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:36.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:36.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:36.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:36.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:36.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:36.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:36.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:36.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:36.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:36.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:36.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:36.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:36.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:36.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.541 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.542 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:36.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.544 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.544 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.544 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:36.544 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:36.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:36.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:36.545 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:36.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:36.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:36.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:36.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:36.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:36.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:36.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:36.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:36.545 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:36.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:36.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:36.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:36.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:36.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:36.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:36.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:36.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:36.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:36.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:36.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:36.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:36.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:36.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:36.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:36.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:36.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:36.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:36.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:36.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:36.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:36.545 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.545 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.545 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.545 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.545 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.545 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.545 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.545 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.545 Installing lib/librte_rcu.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.545 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.545 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.545 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.545 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.545 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.545 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.545 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.545 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.545 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.545 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.545 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.545 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.545 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.545 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.545 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_hash.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_distributor.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_lpm.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_sched.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_pipeline.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.546 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:37.119 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:37.119 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:37.119 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:37.119 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:37.119 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:37.119 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:37.119 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:37.119 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:37.119 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:37.119 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:37.119 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:37.119 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:37.119 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:37.119 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:37.119 Installing app/dpdk-test-acl to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:37.119 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:37.119 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:37.119 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:37.119 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:37.119 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:37.119 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:37.119 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:37.119 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:37.119 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:37.119 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:37.119 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:37.119 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:37.119 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.121 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.123 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:37.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:37.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:37.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:37.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:37.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:37.123 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:03:37.123 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:37.123 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:03:37.123 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:37.123 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 
00:03:37.123 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:37.123 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:03:37.123 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:37.123 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:03:37.123 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:37.123 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:03:37.123 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:37.123 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:03:37.123 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:37.123 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:03:37.123 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:37.123 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:03:37.123 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:37.123 Installing symlink pointing to librte_ethdev.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:03:37.123 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:37.123 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:03:37.123 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:37.123 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:03:37.123 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:37.123 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:03:37.123 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:37.123 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:03:37.123 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:37.123 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:03:37.123 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:37.123 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:03:37.123 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:37.123 Installing symlink pointing to 
librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:03:37.123 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:37.123 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:03:37.123 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:37.123 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:03:37.123 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:37.123 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:03:37.123 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:37.123 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:03:37.123 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:37.123 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:03:37.123 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:37.123 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:03:37.123 Installing symlink pointing to librte_distributor.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:37.123 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:03:37.123 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:37.123 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:03:37.123 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:37.123 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:03:37.123 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:37.123 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:03:37.123 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:37.123 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:03:37.124 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:37.124 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:03:37.124 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:37.124 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:03:37.124 Installing 
symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:37.124 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:03:37.124 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:37.124 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:03:37.124 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:37.124 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:03:37.124 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:37.124 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:03:37.124 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:37.124 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:03:37.124 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:37.124 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:03:37.124 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:37.124 Installing symlink pointing to librte_regexdev.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:03:37.124 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:37.124 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:03:37.124 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:37.124 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:03:37.124 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:37.124 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:03:37.124 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:37.124 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:03:37.124 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:37.124 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:03:37.124 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:37.124 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:03:37.124 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:37.124 
Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:03:37.124 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:37.124 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:03:37.124 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:37.124 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:03:37.124 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:37.124 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:03:37.124 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:37.124 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:03:37.124 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:37.124 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:03:37.124 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:37.124 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:03:37.124 Installing symlink pointing to librte_pipeline.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:37.124 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:03:37.124 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:37.124 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:03:37.124 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:37.124 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:37.124 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:37.124 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:37.124 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:37.124 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:37.124 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:37.124 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:37.124 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:37.124 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:37.124 
'./librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:37.124 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:37.124 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:37.124 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:37.124 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:37.124 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:37.124 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:37.124 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:37.124 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:37.124 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:37.124 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:37.124 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:37.124 20:04:48 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:37.124 20:04:48 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:37.124 00:03:37.124 real 1m23.352s 00:03:37.124 user 14m25.023s 00:03:37.124 sys 1m53.103s 00:03:37.124 20:04:48 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:37.124 20:04:48 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:37.124 ************************************ 00:03:37.124 END TEST build_native_dpdk 00:03:37.124 ************************************ 00:03:37.124 20:04:48 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:37.124 20:04:48 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:37.124 20:04:48 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 
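The long run of "Installing symlink pointing to librte_X.so.23.0 ..." entries above is the standard ELF shared-library versioning chain: the real file carries the full version (`.so.23.0`), the SONAME link (`.so.23`) is what binaries resolve at run time, and the bare `.so` dev link is what `-lrte_X` resolves at link time. A minimal sketch of that chain, using a made-up library name and a temp directory rather than the build tree:

```shell
# Sketch of the versioned symlink chain the DPDK install step creates.
# librte_demo and the temp dir are illustrative, not from the build above.
set -e
dir=$(mktemp -d)
cd "$dir"
touch librte_demo.so.23.0                      # real file: full version
ln -s librte_demo.so.23.0 librte_demo.so.23    # SONAME link, used at run time
ln -s librte_demo.so.23 librte_demo.so         # dev link, used by -lrte_demo at link time
readlink librte_demo.so                        # prints: librte_demo.so.23
readlink librte_demo.so.23                     # prints: librte_demo.so.23.0
```

The `'./librte_bus_pci.so' -> 'dpdk/pmds-23.0/...'` lines are the same idea applied by `symlink-drivers-solibs.sh`, which additionally points the top-level lib dir at the versioned PMD subdirectory.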
00:03:37.124 20:04:48 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:37.124 20:04:48 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:37.124 20:04:48 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:37.124 20:04:48 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:37.124 20:04:48 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:03:37.124 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:03:37.384 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:37.384 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:37.384 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:37.644 Using 'verbs' RDMA provider 00:03:48.565 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:58.566 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:58.566 Creating mk/config.mk...done. 00:03:58.566 Creating mk/cc.flags.mk...done. 00:03:58.566 Type 'make' to build. 00:03:58.566 20:05:09 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:03:58.566 20:05:09 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:58.566 20:05:09 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:58.566 20:05:09 -- common/autotest_common.sh@10 -- $ set +x 00:03:58.566 ************************************ 00:03:58.566 START TEST make 00:03:58.566 ************************************ 00:03:58.566 20:05:09 make -- common/autotest_common.sh@1129 -- $ make -j48 00:03:58.566 make[1]: Nothing to be done for 'all'. 
00:03:59.527 The Meson build system 00:03:59.527 Version: 1.5.0 00:03:59.527 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:59.527 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:59.527 Build type: native build 00:03:59.527 Project name: libvfio-user 00:03:59.527 Project version: 0.0.1 00:03:59.527 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:59.527 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:59.527 Host machine cpu family: x86_64 00:03:59.527 Host machine cpu: x86_64 00:03:59.527 Run-time dependency threads found: YES 00:03:59.527 Library dl found: YES 00:03:59.527 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:59.527 Run-time dependency json-c found: YES 0.17 00:03:59.527 Run-time dependency cmocka found: YES 1.1.7 00:03:59.527 Program pytest-3 found: NO 00:03:59.527 Program flake8 found: NO 00:03:59.527 Program misspell-fixer found: NO 00:03:59.527 Program restructuredtext-lint found: NO 00:03:59.527 Program valgrind found: YES (/usr/bin/valgrind) 00:03:59.527 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:59.527 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:59.527 Compiler for C supports arguments -Wwrite-strings: YES 00:03:59.527 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:59.527 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:59.527 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:59.527 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:59.527 Build targets in project: 8 00:03:59.527 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:59.527 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:59.527 00:03:59.527 libvfio-user 0.0.1 00:03:59.527 00:03:59.527 User defined options 00:03:59.527 buildtype : debug 00:03:59.527 default_library: shared 00:03:59.527 libdir : /usr/local/lib 00:03:59.527 00:03:59.527 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:00.485 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:00.749 [1/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:04:00.749 [2/37] Compiling C object samples/null.p/null.c.o 00:04:00.749 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:04:00.749 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:04:00.749 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:04:00.749 [6/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:04:00.749 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:04:00.749 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:04:00.749 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:04:00.749 [10/37] Compiling C object samples/lspci.p/lspci.c.o 00:04:00.749 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:04:00.749 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:04:00.749 [13/37] Compiling C object samples/server.p/server.c.o 00:04:00.749 [14/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:04:00.749 [15/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:04:00.749 [16/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:04:00.749 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:04:00.749 [18/37] Compiling C object 
test/unit_tests.p/.._lib_tran_pipe.c.o 00:04:00.749 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:04:00.749 [20/37] Compiling C object test/unit_tests.p/mocks.c.o 00:04:00.749 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:04:00.749 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:04:00.749 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:04:00.749 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:04:01.010 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:04:01.010 [26/37] Compiling C object samples/client.p/client.c.o 00:04:01.010 [27/37] Linking target samples/client 00:04:01.010 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:04:01.010 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:04:01.010 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:04:01.272 [31/37] Linking target test/unit_tests 00:04:01.272 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:04:01.272 [33/37] Linking target samples/gpio-pci-idio-16 00:04:01.272 [34/37] Linking target samples/server 00:04:01.272 [35/37] Linking target samples/null 00:04:01.272 [36/37] Linking target samples/lspci 00:04:01.272 [37/37] Linking target samples/shadow_ioeventfd_server 00:04:01.536 INFO: autodetecting backend as ninja 00:04:01.536 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:01.536 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:02.482 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:02.482 ninja: no work to do. 
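The `DESTDIR=.../spdk/build/libvfio-user meson install` command above is a staged install: meson prepends `DESTDIR` to every configured install path (here `libdir : /usr/local/lib`), so the artifacts land inside SPDK's build tree instead of the real `/usr/local`. A minimal sketch of the same staging pattern with plain `install`, using invented paths:

```shell
# Sketch of a DESTDIR-style staged install, as meson does for libvfio-user above.
# The staging dir and header name are made up for illustration.
set -e
stage=$(mktemp -d)                 # stands in for spdk/build/libvfio-user
echo 'int vfio_user_demo;' > demo.h
# install -D creates the destination tree under the staging root,
# so nothing is written to the real /usr/local/include.
install -D -m 644 demo.h "$stage/usr/local/include/demo.h"
ls "$stage/usr/local/include"      # prints: demo.h
```

This is why the later SPDK configure step can consume libvfio-user without any system-wide installation.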
00:04:41.205 CC lib/ut_mock/mock.o 00:04:41.205 CC lib/ut/ut.o 00:04:41.205 CC lib/log/log.o 00:04:41.205 CC lib/log/log_flags.o 00:04:41.205 CC lib/log/log_deprecated.o 00:04:41.205 LIB libspdk_ut.a 00:04:41.205 LIB libspdk_ut_mock.a 00:04:41.205 LIB libspdk_log.a 00:04:41.205 SO libspdk_ut.so.2.0 00:04:41.205 SO libspdk_ut_mock.so.6.0 00:04:41.205 SO libspdk_log.so.7.1 00:04:41.205 SYMLINK libspdk_ut_mock.so 00:04:41.205 SYMLINK libspdk_ut.so 00:04:41.205 SYMLINK libspdk_log.so 00:04:41.205 CC lib/ioat/ioat.o 00:04:41.205 CC lib/dma/dma.o 00:04:41.205 CXX lib/trace_parser/trace.o 00:04:41.205 CC lib/util/base64.o 00:04:41.205 CC lib/util/bit_array.o 00:04:41.205 CC lib/util/cpuset.o 00:04:41.205 CC lib/util/crc16.o 00:04:41.205 CC lib/util/crc32.o 00:04:41.205 CC lib/util/crc32c.o 00:04:41.205 CC lib/util/crc32_ieee.o 00:04:41.205 CC lib/util/crc64.o 00:04:41.205 CC lib/util/dif.o 00:04:41.205 CC lib/util/fd.o 00:04:41.205 CC lib/util/fd_group.o 00:04:41.205 CC lib/util/file.o 00:04:41.205 CC lib/util/hexlify.o 00:04:41.205 CC lib/util/iov.o 00:04:41.205 CC lib/util/math.o 00:04:41.205 CC lib/util/net.o 00:04:41.205 CC lib/util/pipe.o 00:04:41.205 CC lib/util/string.o 00:04:41.205 CC lib/util/strerror_tls.o 00:04:41.205 CC lib/util/uuid.o 00:04:41.205 CC lib/util/xor.o 00:04:41.205 CC lib/util/md5.o 00:04:41.205 CC lib/util/zipf.o 00:04:41.205 CC lib/vfio_user/host/vfio_user_pci.o 00:04:41.205 CC lib/vfio_user/host/vfio_user.o 00:04:41.205 LIB libspdk_dma.a 00:04:41.205 SO libspdk_dma.so.5.0 00:04:41.205 LIB libspdk_ioat.a 00:04:41.205 SO libspdk_ioat.so.7.0 00:04:41.205 SYMLINK libspdk_dma.so 00:04:41.205 SYMLINK libspdk_ioat.so 00:04:41.205 LIB libspdk_vfio_user.a 00:04:41.205 SO libspdk_vfio_user.so.5.0 00:04:41.205 SYMLINK libspdk_vfio_user.so 00:04:41.205 LIB libspdk_util.a 00:04:41.205 SO libspdk_util.so.10.1 00:04:41.205 SYMLINK libspdk_util.so 00:04:41.205 CC lib/conf/conf.o 00:04:41.205 CC lib/rdma_utils/rdma_utils.o 00:04:41.205 CC lib/idxd/idxd.o 
00:04:41.205 CC lib/json/json_parse.o 00:04:41.205 CC lib/vmd/vmd.o 00:04:41.205 CC lib/env_dpdk/env.o 00:04:41.205 CC lib/idxd/idxd_user.o 00:04:41.205 CC lib/vmd/led.o 00:04:41.205 CC lib/json/json_util.o 00:04:41.205 CC lib/env_dpdk/memory.o 00:04:41.205 CC lib/idxd/idxd_kernel.o 00:04:41.205 CC lib/json/json_write.o 00:04:41.205 CC lib/env_dpdk/pci.o 00:04:41.205 CC lib/env_dpdk/init.o 00:04:41.205 CC lib/env_dpdk/threads.o 00:04:41.205 CC lib/env_dpdk/pci_ioat.o 00:04:41.205 CC lib/env_dpdk/pci_virtio.o 00:04:41.205 CC lib/env_dpdk/pci_vmd.o 00:04:41.205 CC lib/env_dpdk/pci_idxd.o 00:04:41.205 CC lib/env_dpdk/pci_event.o 00:04:41.205 CC lib/env_dpdk/sigbus_handler.o 00:04:41.205 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:41.205 CC lib/env_dpdk/pci_dpdk.o 00:04:41.205 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:41.205 LIB libspdk_conf.a 00:04:41.205 SO libspdk_conf.so.6.0 00:04:41.205 LIB libspdk_rdma_utils.a 00:04:41.205 SO libspdk_rdma_utils.so.1.0 00:04:41.205 SYMLINK libspdk_conf.so 00:04:41.205 LIB libspdk_json.a 00:04:41.205 SO libspdk_json.so.6.0 00:04:41.205 SYMLINK libspdk_rdma_utils.so 00:04:41.205 SYMLINK libspdk_json.so 00:04:41.205 CC lib/rdma_provider/common.o 00:04:41.205 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:41.205 CC lib/jsonrpc/jsonrpc_server.o 00:04:41.205 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:41.205 CC lib/jsonrpc/jsonrpc_client.o 00:04:41.205 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:41.205 LIB libspdk_idxd.a 00:04:41.205 SO libspdk_idxd.so.12.1 00:04:41.205 SYMLINK libspdk_idxd.so 00:04:41.205 LIB libspdk_vmd.a 00:04:41.205 SO libspdk_vmd.so.6.0 00:04:41.205 LIB libspdk_rdma_provider.a 00:04:41.205 SYMLINK libspdk_vmd.so 00:04:41.205 SO libspdk_rdma_provider.so.7.0 00:04:41.205 LIB libspdk_jsonrpc.a 00:04:41.205 SYMLINK libspdk_rdma_provider.so 00:04:41.205 LIB libspdk_trace_parser.a 00:04:41.205 SO libspdk_trace_parser.so.6.0 00:04:41.205 SO libspdk_jsonrpc.so.6.0 00:04:41.205 SYMLINK libspdk_jsonrpc.so 00:04:41.205 SYMLINK 
libspdk_trace_parser.so 00:04:41.205 CC lib/rpc/rpc.o 00:04:41.205 LIB libspdk_rpc.a 00:04:41.205 SO libspdk_rpc.so.6.0 00:04:41.205 SYMLINK libspdk_rpc.so 00:04:41.464 CC lib/trace/trace.o 00:04:41.464 CC lib/trace/trace_flags.o 00:04:41.464 CC lib/keyring/keyring.o 00:04:41.464 CC lib/trace/trace_rpc.o 00:04:41.464 CC lib/keyring/keyring_rpc.o 00:04:41.464 CC lib/notify/notify.o 00:04:41.464 CC lib/notify/notify_rpc.o 00:04:41.464 LIB libspdk_notify.a 00:04:41.464 SO libspdk_notify.so.6.0 00:04:41.464 SYMLINK libspdk_notify.so 00:04:41.464 LIB libspdk_keyring.a 00:04:41.722 LIB libspdk_trace.a 00:04:41.722 SO libspdk_keyring.so.2.0 00:04:41.722 SO libspdk_trace.so.11.0 00:04:41.722 SYMLINK libspdk_keyring.so 00:04:41.722 SYMLINK libspdk_trace.so 00:04:41.722 LIB libspdk_env_dpdk.a 00:04:41.981 CC lib/sock/sock.o 00:04:41.981 CC lib/sock/sock_rpc.o 00:04:41.981 CC lib/thread/thread.o 00:04:41.981 CC lib/thread/iobuf.o 00:04:41.981 SO libspdk_env_dpdk.so.15.1 00:04:41.981 SYMLINK libspdk_env_dpdk.so 00:04:42.240 LIB libspdk_sock.a 00:04:42.240 SO libspdk_sock.so.10.0 00:04:42.240 SYMLINK libspdk_sock.so 00:04:42.500 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:42.500 CC lib/nvme/nvme_ctrlr.o 00:04:42.500 CC lib/nvme/nvme_fabric.o 00:04:42.500 CC lib/nvme/nvme_ns_cmd.o 00:04:42.500 CC lib/nvme/nvme_ns.o 00:04:42.500 CC lib/nvme/nvme_pcie_common.o 00:04:42.500 CC lib/nvme/nvme_pcie.o 00:04:42.500 CC lib/nvme/nvme_qpair.o 00:04:42.500 CC lib/nvme/nvme.o 00:04:42.500 CC lib/nvme/nvme_quirks.o 00:04:42.500 CC lib/nvme/nvme_transport.o 00:04:42.500 CC lib/nvme/nvme_discovery.o 00:04:42.500 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:42.500 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:42.500 CC lib/nvme/nvme_tcp.o 00:04:42.500 CC lib/nvme/nvme_opal.o 00:04:42.500 CC lib/nvme/nvme_io_msg.o 00:04:42.500 CC lib/nvme/nvme_poll_group.o 00:04:42.500 CC lib/nvme/nvme_zns.o 00:04:42.500 CC lib/nvme/nvme_stubs.o 00:04:42.500 CC lib/nvme/nvme_auth.o 00:04:42.500 CC lib/nvme/nvme_cuse.o 00:04:42.500 CC 
lib/nvme/nvme_vfio_user.o 00:04:42.500 CC lib/nvme/nvme_rdma.o 00:04:43.437 LIB libspdk_thread.a 00:04:43.437 SO libspdk_thread.so.11.0 00:04:43.437 SYMLINK libspdk_thread.so 00:04:43.697 CC lib/accel/accel.o 00:04:43.697 CC lib/accel/accel_rpc.o 00:04:43.697 CC lib/vfu_tgt/tgt_endpoint.o 00:04:43.697 CC lib/vfu_tgt/tgt_rpc.o 00:04:43.697 CC lib/accel/accel_sw.o 00:04:43.697 CC lib/blob/blobstore.o 00:04:43.697 CC lib/virtio/virtio.o 00:04:43.697 CC lib/virtio/virtio_vhost_user.o 00:04:43.697 CC lib/blob/request.o 00:04:43.697 CC lib/virtio/virtio_vfio_user.o 00:04:43.697 CC lib/blob/zeroes.o 00:04:43.697 CC lib/virtio/virtio_pci.o 00:04:43.697 CC lib/blob/blob_bs_dev.o 00:04:43.697 CC lib/init/json_config.o 00:04:43.697 CC lib/fsdev/fsdev.o 00:04:43.697 CC lib/init/subsystem.o 00:04:43.697 CC lib/fsdev/fsdev_io.o 00:04:43.697 CC lib/fsdev/fsdev_rpc.o 00:04:43.697 CC lib/init/subsystem_rpc.o 00:04:43.697 CC lib/init/rpc.o 00:04:43.957 LIB libspdk_init.a 00:04:43.957 SO libspdk_init.so.6.0 00:04:43.957 LIB libspdk_virtio.a 00:04:43.957 LIB libspdk_vfu_tgt.a 00:04:43.957 SYMLINK libspdk_init.so 00:04:44.216 SO libspdk_virtio.so.7.0 00:04:44.216 SO libspdk_vfu_tgt.so.3.0 00:04:44.216 SYMLINK libspdk_virtio.so 00:04:44.216 SYMLINK libspdk_vfu_tgt.so 00:04:44.216 CC lib/event/app.o 00:04:44.216 CC lib/event/reactor.o 00:04:44.216 CC lib/event/log_rpc.o 00:04:44.216 CC lib/event/app_rpc.o 00:04:44.216 CC lib/event/scheduler_static.o 00:04:44.475 LIB libspdk_fsdev.a 00:04:44.475 SO libspdk_fsdev.so.2.0 00:04:44.475 SYMLINK libspdk_fsdev.so 00:04:44.735 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:44.735 LIB libspdk_event.a 00:04:44.735 SO libspdk_event.so.14.0 00:04:44.735 SYMLINK libspdk_event.so 00:04:44.994 LIB libspdk_accel.a 00:04:44.994 SO libspdk_accel.so.16.0 00:04:44.994 SYMLINK libspdk_accel.so 00:04:44.994 LIB libspdk_nvme.a 00:04:44.994 SO libspdk_nvme.so.15.0 00:04:45.253 CC lib/bdev/bdev.o 00:04:45.253 CC lib/bdev/bdev_rpc.o 00:04:45.253 CC 
lib/bdev/bdev_zone.o 00:04:45.253 CC lib/bdev/part.o 00:04:45.253 CC lib/bdev/scsi_nvme.o 00:04:45.253 LIB libspdk_fuse_dispatcher.a 00:04:45.253 SYMLINK libspdk_nvme.so 00:04:45.253 SO libspdk_fuse_dispatcher.so.1.0 00:04:45.513 SYMLINK libspdk_fuse_dispatcher.so 00:04:46.892 LIB libspdk_blob.a 00:04:46.892 SO libspdk_blob.so.11.0 00:04:46.892 SYMLINK libspdk_blob.so 00:04:47.151 CC lib/blobfs/blobfs.o 00:04:47.151 CC lib/blobfs/tree.o 00:04:47.151 CC lib/lvol/lvol.o 00:04:47.718 LIB libspdk_bdev.a 00:04:47.718 SO libspdk_bdev.so.17.0 00:04:47.718 SYMLINK libspdk_bdev.so 00:04:47.987 LIB libspdk_blobfs.a 00:04:47.987 SO libspdk_blobfs.so.10.0 00:04:47.987 SYMLINK libspdk_blobfs.so 00:04:47.987 CC lib/nbd/nbd.o 00:04:47.987 CC lib/nbd/nbd_rpc.o 00:04:47.987 CC lib/ublk/ublk.o 00:04:47.987 CC lib/ublk/ublk_rpc.o 00:04:47.987 CC lib/scsi/dev.o 00:04:47.987 CC lib/scsi/lun.o 00:04:47.987 CC lib/scsi/port.o 00:04:47.987 CC lib/nvmf/ctrlr.o 00:04:47.987 CC lib/nvmf/ctrlr_discovery.o 00:04:47.987 CC lib/scsi/scsi.o 00:04:47.987 CC lib/scsi/scsi_bdev.o 00:04:47.987 CC lib/nvmf/ctrlr_bdev.o 00:04:47.987 CC lib/ftl/ftl_core.o 00:04:47.987 CC lib/ftl/ftl_init.o 00:04:47.987 CC lib/nvmf/subsystem.o 00:04:47.987 CC lib/scsi/scsi_pr.o 00:04:47.987 CC lib/ftl/ftl_layout.o 00:04:47.987 CC lib/nvmf/nvmf.o 00:04:47.987 CC lib/scsi/scsi_rpc.o 00:04:47.987 CC lib/ftl/ftl_debug.o 00:04:47.987 CC lib/scsi/task.o 00:04:47.987 CC lib/nvmf/transport.o 00:04:47.987 CC lib/nvmf/nvmf_rpc.o 00:04:47.987 CC lib/nvmf/tcp.o 00:04:47.987 CC lib/ftl/ftl_io.o 00:04:47.987 CC lib/ftl/ftl_sb.o 00:04:47.987 CC lib/ftl/ftl_l2p.o 00:04:47.987 CC lib/nvmf/stubs.o 00:04:47.987 CC lib/nvmf/mdns_server.o 00:04:47.987 CC lib/ftl/ftl_l2p_flat.o 00:04:47.987 CC lib/nvmf/vfio_user.o 00:04:47.987 CC lib/ftl/ftl_nv_cache.o 00:04:47.987 CC lib/nvmf/rdma.o 00:04:47.987 CC lib/ftl/ftl_band.o 00:04:47.987 CC lib/ftl/ftl_band_ops.o 00:04:47.987 CC lib/nvmf/auth.o 00:04:47.987 CC lib/ftl/ftl_writer.o 00:04:47.987 CC 
lib/ftl/ftl_rq.o 00:04:47.987 CC lib/ftl/ftl_reloc.o 00:04:47.987 CC lib/ftl/ftl_l2p_cache.o 00:04:47.987 CC lib/ftl/ftl_p2l.o 00:04:47.987 CC lib/ftl/ftl_p2l_log.o 00:04:47.987 CC lib/ftl/mngt/ftl_mngt.o 00:04:47.987 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:47.987 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:47.987 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:47.987 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:47.987 LIB libspdk_lvol.a 00:04:47.987 SO libspdk_lvol.so.10.0 00:04:48.252 SYMLINK libspdk_lvol.so 00:04:48.252 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:48.516 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:48.516 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:48.516 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:48.516 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:48.516 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:48.516 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:48.516 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:48.516 CC lib/ftl/utils/ftl_conf.o 00:04:48.516 CC lib/ftl/utils/ftl_md.o 00:04:48.516 CC lib/ftl/utils/ftl_mempool.o 00:04:48.516 CC lib/ftl/utils/ftl_bitmap.o 00:04:48.516 CC lib/ftl/utils/ftl_property.o 00:04:48.516 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:48.516 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:48.516 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:48.516 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:48.516 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:48.516 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:48.776 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:48.776 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:48.776 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:48.776 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:48.776 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:48.776 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:48.776 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:48.776 CC lib/ftl/base/ftl_base_dev.o 00:04:48.776 CC lib/ftl/base/ftl_base_bdev.o 00:04:48.776 CC lib/ftl/ftl_trace.o 00:04:48.776 LIB libspdk_nbd.a 00:04:49.035 SO libspdk_nbd.so.7.0 00:04:49.035 LIB libspdk_scsi.a 00:04:49.035 SYMLINK libspdk_nbd.so 00:04:49.035 SO 
libspdk_scsi.so.9.0 00:04:49.035 SYMLINK libspdk_scsi.so 00:04:49.035 LIB libspdk_ublk.a 00:04:49.035 SO libspdk_ublk.so.3.0 00:04:49.294 SYMLINK libspdk_ublk.so 00:04:49.294 CC lib/iscsi/conn.o 00:04:49.294 CC lib/vhost/vhost.o 00:04:49.294 CC lib/iscsi/init_grp.o 00:04:49.294 CC lib/vhost/vhost_rpc.o 00:04:49.294 CC lib/iscsi/iscsi.o 00:04:49.294 CC lib/vhost/vhost_scsi.o 00:04:49.294 CC lib/vhost/vhost_blk.o 00:04:49.294 CC lib/iscsi/param.o 00:04:49.294 CC lib/vhost/rte_vhost_user.o 00:04:49.294 CC lib/iscsi/portal_grp.o 00:04:49.294 CC lib/iscsi/tgt_node.o 00:04:49.294 CC lib/iscsi/iscsi_subsystem.o 00:04:49.294 CC lib/iscsi/iscsi_rpc.o 00:04:49.294 CC lib/iscsi/task.o 00:04:49.554 LIB libspdk_ftl.a 00:04:49.812 SO libspdk_ftl.so.9.0 00:04:50.071 SYMLINK libspdk_ftl.so 00:04:50.640 LIB libspdk_vhost.a 00:04:50.640 SO libspdk_vhost.so.8.0 00:04:50.640 SYMLINK libspdk_vhost.so 00:04:50.640 LIB libspdk_iscsi.a 00:04:50.902 LIB libspdk_nvmf.a 00:04:50.902 SO libspdk_iscsi.so.8.0 00:04:50.902 SO libspdk_nvmf.so.20.0 00:04:50.902 SYMLINK libspdk_iscsi.so 00:04:51.162 SYMLINK libspdk_nvmf.so 00:04:51.422 CC module/vfu_device/vfu_virtio.o 00:04:51.422 CC module/vfu_device/vfu_virtio_blk.o 00:04:51.422 CC module/vfu_device/vfu_virtio_scsi.o 00:04:51.422 CC module/env_dpdk/env_dpdk_rpc.o 00:04:51.422 CC module/vfu_device/vfu_virtio_rpc.o 00:04:51.422 CC module/vfu_device/vfu_virtio_fs.o 00:04:51.422 CC module/keyring/file/keyring.o 00:04:51.422 CC module/accel/error/accel_error.o 00:04:51.422 CC module/keyring/file/keyring_rpc.o 00:04:51.422 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:51.422 CC module/accel/error/accel_error_rpc.o 00:04:51.422 CC module/scheduler/gscheduler/gscheduler.o 00:04:51.422 CC module/blob/bdev/blob_bdev.o 00:04:51.422 CC module/keyring/linux/keyring.o 00:04:51.422 CC module/sock/posix/posix.o 00:04:51.422 CC module/keyring/linux/keyring_rpc.o 00:04:51.422 CC module/accel/dsa/accel_dsa.o 00:04:51.422 CC module/fsdev/aio/fsdev_aio.o 
00:04:51.422 CC module/accel/ioat/accel_ioat.o 00:04:51.422 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:51.422 CC module/accel/dsa/accel_dsa_rpc.o 00:04:51.422 CC module/accel/ioat/accel_ioat_rpc.o 00:04:51.422 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:51.422 CC module/accel/iaa/accel_iaa.o 00:04:51.422 CC module/fsdev/aio/linux_aio_mgr.o 00:04:51.422 CC module/accel/iaa/accel_iaa_rpc.o 00:04:51.422 LIB libspdk_env_dpdk_rpc.a 00:04:51.422 SO libspdk_env_dpdk_rpc.so.6.0 00:04:51.681 SYMLINK libspdk_env_dpdk_rpc.so 00:04:51.681 LIB libspdk_scheduler_gscheduler.a 00:04:51.681 LIB libspdk_scheduler_dpdk_governor.a 00:04:51.681 SO libspdk_scheduler_gscheduler.so.4.0 00:04:51.681 LIB libspdk_keyring_linux.a 00:04:51.681 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:51.681 LIB libspdk_accel_ioat.a 00:04:51.681 SO libspdk_keyring_linux.so.1.0 00:04:51.681 LIB libspdk_accel_iaa.a 00:04:51.681 SO libspdk_accel_ioat.so.6.0 00:04:51.681 SYMLINK libspdk_scheduler_gscheduler.so 00:04:51.681 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:51.681 LIB libspdk_keyring_file.a 00:04:51.681 SO libspdk_accel_iaa.so.3.0 00:04:51.681 SO libspdk_keyring_file.so.2.0 00:04:51.681 SYMLINK libspdk_keyring_linux.so 00:04:51.681 SYMLINK libspdk_accel_ioat.so 00:04:51.681 LIB libspdk_scheduler_dynamic.a 00:04:51.681 LIB libspdk_accel_dsa.a 00:04:51.681 LIB libspdk_accel_error.a 00:04:51.681 SYMLINK libspdk_accel_iaa.so 00:04:51.681 SO libspdk_scheduler_dynamic.so.4.0 00:04:51.681 SYMLINK libspdk_keyring_file.so 00:04:51.681 SO libspdk_accel_dsa.so.5.0 00:04:51.681 SO libspdk_accel_error.so.2.0 00:04:51.681 LIB libspdk_blob_bdev.a 00:04:51.681 SO libspdk_blob_bdev.so.11.0 00:04:51.681 SYMLINK libspdk_scheduler_dynamic.so 00:04:51.941 SYMLINK libspdk_accel_dsa.so 00:04:51.941 SYMLINK libspdk_accel_error.so 00:04:51.941 SYMLINK libspdk_blob_bdev.so 00:04:51.941 LIB libspdk_vfu_device.a 00:04:51.941 SO libspdk_vfu_device.so.3.0 00:04:52.201 CC module/bdev/null/bdev_null.o 00:04:52.201 
CC module/bdev/null/bdev_null_rpc.o 00:04:52.201 CC module/bdev/malloc/bdev_malloc.o 00:04:52.201 CC module/blobfs/bdev/blobfs_bdev.o 00:04:52.201 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:52.201 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:52.201 CC module/bdev/delay/vbdev_delay.o 00:04:52.201 CC module/bdev/gpt/gpt.o 00:04:52.201 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:52.201 CC module/bdev/lvol/vbdev_lvol.o 00:04:52.201 CC module/bdev/aio/bdev_aio.o 00:04:52.201 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:52.201 CC module/bdev/gpt/vbdev_gpt.o 00:04:52.201 CC module/bdev/error/vbdev_error.o 00:04:52.201 CC module/bdev/passthru/vbdev_passthru.o 00:04:52.201 CC module/bdev/aio/bdev_aio_rpc.o 00:04:52.201 CC module/bdev/error/vbdev_error_rpc.o 00:04:52.201 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:52.201 CC module/bdev/raid/bdev_raid.o 00:04:52.201 CC module/bdev/split/vbdev_split.o 00:04:52.201 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:52.201 CC module/bdev/split/vbdev_split_rpc.o 00:04:52.201 CC module/bdev/raid/bdev_raid_rpc.o 00:04:52.201 CC module/bdev/raid/bdev_raid_sb.o 00:04:52.201 CC module/bdev/iscsi/bdev_iscsi.o 00:04:52.201 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:52.201 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:52.201 CC module/bdev/raid/raid0.o 00:04:52.201 CC module/bdev/raid/raid1.o 00:04:52.201 CC module/bdev/nvme/bdev_nvme.o 00:04:52.201 CC module/bdev/raid/concat.o 00:04:52.201 CC module/bdev/ftl/bdev_ftl.o 00:04:52.201 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:52.201 CC module/bdev/nvme/nvme_rpc.o 00:04:52.201 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:52.201 CC module/bdev/nvme/bdev_mdns_client.o 00:04:52.201 CC module/bdev/nvme/vbdev_opal.o 00:04:52.201 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:52.201 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:52.201 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:52.201 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:52.201 CC module/bdev/virtio/bdev_virtio_rpc.o 
00:04:52.201 SYMLINK libspdk_vfu_device.so 00:04:52.201 LIB libspdk_fsdev_aio.a 00:04:52.201 SO libspdk_fsdev_aio.so.1.0 00:04:52.201 LIB libspdk_sock_posix.a 00:04:52.461 SO libspdk_sock_posix.so.6.0 00:04:52.461 SYMLINK libspdk_fsdev_aio.so 00:04:52.461 SYMLINK libspdk_sock_posix.so 00:04:52.461 LIB libspdk_blobfs_bdev.a 00:04:52.461 SO libspdk_blobfs_bdev.so.6.0 00:04:52.461 LIB libspdk_bdev_split.a 00:04:52.461 LIB libspdk_bdev_gpt.a 00:04:52.461 LIB libspdk_bdev_null.a 00:04:52.461 SYMLINK libspdk_blobfs_bdev.so 00:04:52.720 SO libspdk_bdev_split.so.6.0 00:04:52.720 SO libspdk_bdev_gpt.so.6.0 00:04:52.720 SO libspdk_bdev_null.so.6.0 00:04:52.720 LIB libspdk_bdev_zone_block.a 00:04:52.720 LIB libspdk_bdev_passthru.a 00:04:52.720 LIB libspdk_bdev_error.a 00:04:52.720 SO libspdk_bdev_zone_block.so.6.0 00:04:52.720 SO libspdk_bdev_passthru.so.6.0 00:04:52.720 SO libspdk_bdev_error.so.6.0 00:04:52.720 SYMLINK libspdk_bdev_gpt.so 00:04:52.720 SYMLINK libspdk_bdev_split.so 00:04:52.720 SYMLINK libspdk_bdev_null.so 00:04:52.720 LIB libspdk_bdev_ftl.a 00:04:52.720 LIB libspdk_bdev_aio.a 00:04:52.720 LIB libspdk_bdev_delay.a 00:04:52.720 LIB libspdk_bdev_iscsi.a 00:04:52.720 SYMLINK libspdk_bdev_zone_block.so 00:04:52.720 SYMLINK libspdk_bdev_passthru.so 00:04:52.720 SO libspdk_bdev_ftl.so.6.0 00:04:52.720 SO libspdk_bdev_aio.so.6.0 00:04:52.720 SO libspdk_bdev_delay.so.6.0 00:04:52.720 SO libspdk_bdev_iscsi.so.6.0 00:04:52.720 SYMLINK libspdk_bdev_error.so 00:04:52.720 LIB libspdk_bdev_malloc.a 00:04:52.720 SYMLINK libspdk_bdev_ftl.so 00:04:52.720 SYMLINK libspdk_bdev_aio.so 00:04:52.720 SYMLINK libspdk_bdev_iscsi.so 00:04:52.720 SYMLINK libspdk_bdev_delay.so 00:04:52.720 SO libspdk_bdev_malloc.so.6.0 00:04:52.720 SYMLINK libspdk_bdev_malloc.so 00:04:52.981 LIB libspdk_bdev_virtio.a 00:04:52.981 SO libspdk_bdev_virtio.so.6.0 00:04:52.981 LIB libspdk_bdev_lvol.a 00:04:52.981 SO libspdk_bdev_lvol.so.6.0 00:04:52.981 SYMLINK libspdk_bdev_virtio.so 00:04:52.981 SYMLINK 
libspdk_bdev_lvol.so 00:04:53.242 LIB libspdk_bdev_raid.a 00:04:53.501 SO libspdk_bdev_raid.so.6.0 00:04:53.501 SYMLINK libspdk_bdev_raid.so 00:04:54.885 LIB libspdk_bdev_nvme.a 00:04:54.885 SO libspdk_bdev_nvme.so.7.1 00:04:54.885 SYMLINK libspdk_bdev_nvme.so 00:04:55.453 CC module/event/subsystems/iobuf/iobuf.o 00:04:55.453 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:55.453 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:55.453 CC module/event/subsystems/sock/sock.o 00:04:55.453 CC module/event/subsystems/vmd/vmd.o 00:04:55.453 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:55.453 CC module/event/subsystems/keyring/keyring.o 00:04:55.453 CC module/event/subsystems/scheduler/scheduler.o 00:04:55.453 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:55.453 CC module/event/subsystems/fsdev/fsdev.o 00:04:55.453 LIB libspdk_event_keyring.a 00:04:55.453 LIB libspdk_event_vhost_blk.a 00:04:55.453 LIB libspdk_event_fsdev.a 00:04:55.453 LIB libspdk_event_vfu_tgt.a 00:04:55.453 LIB libspdk_event_scheduler.a 00:04:55.453 LIB libspdk_event_vmd.a 00:04:55.453 LIB libspdk_event_sock.a 00:04:55.453 SO libspdk_event_keyring.so.1.0 00:04:55.453 LIB libspdk_event_iobuf.a 00:04:55.453 SO libspdk_event_vhost_blk.so.3.0 00:04:55.453 SO libspdk_event_fsdev.so.1.0 00:04:55.453 SO libspdk_event_vfu_tgt.so.3.0 00:04:55.453 SO libspdk_event_scheduler.so.4.0 00:04:55.453 SO libspdk_event_vmd.so.6.0 00:04:55.453 SO libspdk_event_sock.so.5.0 00:04:55.453 SO libspdk_event_iobuf.so.3.0 00:04:55.712 SYMLINK libspdk_event_keyring.so 00:04:55.712 SYMLINK libspdk_event_vhost_blk.so 00:04:55.712 SYMLINK libspdk_event_fsdev.so 00:04:55.712 SYMLINK libspdk_event_scheduler.so 00:04:55.712 SYMLINK libspdk_event_vfu_tgt.so 00:04:55.712 SYMLINK libspdk_event_sock.so 00:04:55.712 SYMLINK libspdk_event_vmd.so 00:04:55.712 SYMLINK libspdk_event_iobuf.so 00:04:55.712 CC module/event/subsystems/accel/accel.o 00:04:55.971 LIB libspdk_event_accel.a 00:04:55.971 SO libspdk_event_accel.so.6.0 
00:04:55.971 SYMLINK libspdk_event_accel.so 00:04:56.229 CC module/event/subsystems/bdev/bdev.o 00:04:56.488 LIB libspdk_event_bdev.a 00:04:56.488 SO libspdk_event_bdev.so.6.0 00:04:56.488 SYMLINK libspdk_event_bdev.so 00:04:56.746 CC module/event/subsystems/ublk/ublk.o 00:04:56.746 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:56.746 CC module/event/subsystems/scsi/scsi.o 00:04:56.746 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:56.746 CC module/event/subsystems/nbd/nbd.o 00:04:56.746 LIB libspdk_event_nbd.a 00:04:56.746 LIB libspdk_event_ublk.a 00:04:56.746 LIB libspdk_event_scsi.a 00:04:56.746 SO libspdk_event_ublk.so.3.0 00:04:56.746 SO libspdk_event_nbd.so.6.0 00:04:56.746 SO libspdk_event_scsi.so.6.0 00:04:56.746 SYMLINK libspdk_event_ublk.so 00:04:56.746 SYMLINK libspdk_event_nbd.so 00:04:57.005 SYMLINK libspdk_event_scsi.so 00:04:57.005 LIB libspdk_event_nvmf.a 00:04:57.005 SO libspdk_event_nvmf.so.6.0 00:04:57.005 SYMLINK libspdk_event_nvmf.so 00:04:57.005 CC module/event/subsystems/iscsi/iscsi.o 00:04:57.005 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:57.263 LIB libspdk_event_vhost_scsi.a 00:04:57.263 SO libspdk_event_vhost_scsi.so.3.0 00:04:57.263 LIB libspdk_event_iscsi.a 00:04:57.263 SO libspdk_event_iscsi.so.6.0 00:04:57.263 SYMLINK libspdk_event_vhost_scsi.so 00:04:57.263 SYMLINK libspdk_event_iscsi.so 00:04:57.522 SO libspdk.so.6.0 00:04:57.522 SYMLINK libspdk.so 00:04:57.522 CC app/spdk_top/spdk_top.o 00:04:57.522 CXX app/trace/trace.o 00:04:57.522 CC app/spdk_nvme_discover/discovery_aer.o 00:04:57.522 CC app/spdk_lspci/spdk_lspci.o 00:04:57.522 CC app/spdk_nvme_identify/identify.o 00:04:57.522 TEST_HEADER include/spdk/accel_module.h 00:04:57.522 TEST_HEADER include/spdk/accel.h 00:04:57.522 CC test/rpc_client/rpc_client_test.o 00:04:57.522 TEST_HEADER include/spdk/assert.h 00:04:57.522 CC app/spdk_nvme_perf/perf.o 00:04:57.522 TEST_HEADER include/spdk/barrier.h 00:04:57.522 TEST_HEADER include/spdk/base64.h 00:04:57.522 
TEST_HEADER include/spdk/bdev.h 00:04:57.522 TEST_HEADER include/spdk/bdev_module.h 00:04:57.522 TEST_HEADER include/spdk/bdev_zone.h 00:04:57.522 TEST_HEADER include/spdk/bit_array.h 00:04:57.522 TEST_HEADER include/spdk/bit_pool.h 00:04:57.522 CC app/trace_record/trace_record.o 00:04:57.522 TEST_HEADER include/spdk/blob_bdev.h 00:04:57.522 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:57.522 TEST_HEADER include/spdk/blobfs.h 00:04:57.522 TEST_HEADER include/spdk/blob.h 00:04:57.522 TEST_HEADER include/spdk/conf.h 00:04:57.522 TEST_HEADER include/spdk/config.h 00:04:57.522 TEST_HEADER include/spdk/cpuset.h 00:04:57.522 TEST_HEADER include/spdk/crc16.h 00:04:57.522 TEST_HEADER include/spdk/crc32.h 00:04:57.522 TEST_HEADER include/spdk/crc64.h 00:04:57.522 TEST_HEADER include/spdk/dif.h 00:04:57.522 TEST_HEADER include/spdk/dma.h 00:04:57.522 TEST_HEADER include/spdk/endian.h 00:04:57.522 TEST_HEADER include/spdk/env_dpdk.h 00:04:57.522 TEST_HEADER include/spdk/env.h 00:04:57.522 TEST_HEADER include/spdk/fd_group.h 00:04:57.522 TEST_HEADER include/spdk/event.h 00:04:57.522 TEST_HEADER include/spdk/fd.h 00:04:57.522 TEST_HEADER include/spdk/file.h 00:04:57.522 TEST_HEADER include/spdk/fsdev_module.h 00:04:57.522 TEST_HEADER include/spdk/fsdev.h 00:04:57.522 TEST_HEADER include/spdk/ftl.h 00:04:57.522 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:57.522 TEST_HEADER include/spdk/gpt_spec.h 00:04:57.522 TEST_HEADER include/spdk/hexlify.h 00:04:57.522 TEST_HEADER include/spdk/histogram_data.h 00:04:57.522 TEST_HEADER include/spdk/idxd.h 00:04:57.522 TEST_HEADER include/spdk/idxd_spec.h 00:04:57.522 TEST_HEADER include/spdk/init.h 00:04:57.522 TEST_HEADER include/spdk/ioat.h 00:04:57.522 TEST_HEADER include/spdk/ioat_spec.h 00:04:57.522 TEST_HEADER include/spdk/iscsi_spec.h 00:04:57.522 TEST_HEADER include/spdk/jsonrpc.h 00:04:57.522 TEST_HEADER include/spdk/json.h 00:04:57.522 TEST_HEADER include/spdk/keyring.h 00:04:57.522 TEST_HEADER include/spdk/keyring_module.h 
00:04:57.522 TEST_HEADER include/spdk/likely.h 00:04:57.522 TEST_HEADER include/spdk/log.h 00:04:57.522 TEST_HEADER include/spdk/lvol.h 00:04:57.522 TEST_HEADER include/spdk/md5.h 00:04:57.522 TEST_HEADER include/spdk/memory.h 00:04:57.522 TEST_HEADER include/spdk/mmio.h 00:04:57.522 TEST_HEADER include/spdk/nbd.h 00:04:57.522 TEST_HEADER include/spdk/net.h 00:04:57.522 TEST_HEADER include/spdk/notify.h 00:04:57.522 TEST_HEADER include/spdk/nvme.h 00:04:57.522 TEST_HEADER include/spdk/nvme_intel.h 00:04:57.522 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:57.522 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:57.522 TEST_HEADER include/spdk/nvme_spec.h 00:04:57.522 TEST_HEADER include/spdk/nvme_zns.h 00:04:57.523 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:57.523 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:57.523 TEST_HEADER include/spdk/nvmf.h 00:04:57.523 TEST_HEADER include/spdk/nvmf_spec.h 00:04:57.523 TEST_HEADER include/spdk/nvmf_transport.h 00:04:57.523 TEST_HEADER include/spdk/opal.h 00:04:57.523 TEST_HEADER include/spdk/pci_ids.h 00:04:57.523 TEST_HEADER include/spdk/opal_spec.h 00:04:57.523 TEST_HEADER include/spdk/pipe.h 00:04:57.523 TEST_HEADER include/spdk/queue.h 00:04:57.523 TEST_HEADER include/spdk/rpc.h 00:04:57.523 TEST_HEADER include/spdk/reduce.h 00:04:57.523 TEST_HEADER include/spdk/scheduler.h 00:04:57.523 TEST_HEADER include/spdk/scsi.h 00:04:57.523 TEST_HEADER include/spdk/scsi_spec.h 00:04:57.523 TEST_HEADER include/spdk/stdinc.h 00:04:57.523 TEST_HEADER include/spdk/sock.h 00:04:57.523 TEST_HEADER include/spdk/thread.h 00:04:57.523 TEST_HEADER include/spdk/string.h 00:04:57.523 TEST_HEADER include/spdk/trace.h 00:04:57.523 TEST_HEADER include/spdk/trace_parser.h 00:04:57.523 TEST_HEADER include/spdk/tree.h 00:04:57.523 TEST_HEADER include/spdk/ublk.h 00:04:57.790 TEST_HEADER include/spdk/util.h 00:04:57.790 TEST_HEADER include/spdk/uuid.h 00:04:57.790 TEST_HEADER include/spdk/version.h 00:04:57.790 CC app/spdk_dd/spdk_dd.o 00:04:57.790 
TEST_HEADER include/spdk/vfio_user_pci.h 00:04:57.790 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:57.790 TEST_HEADER include/spdk/vhost.h 00:04:57.790 TEST_HEADER include/spdk/vmd.h 00:04:57.790 TEST_HEADER include/spdk/xor.h 00:04:57.790 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:57.790 TEST_HEADER include/spdk/zipf.h 00:04:57.790 CXX test/cpp_headers/accel.o 00:04:57.790 CXX test/cpp_headers/accel_module.o 00:04:57.790 CXX test/cpp_headers/assert.o 00:04:57.790 CXX test/cpp_headers/barrier.o 00:04:57.790 CXX test/cpp_headers/base64.o 00:04:57.790 CXX test/cpp_headers/bdev.o 00:04:57.790 CXX test/cpp_headers/bdev_module.o 00:04:57.790 CXX test/cpp_headers/bdev_zone.o 00:04:57.790 CXX test/cpp_headers/bit_array.o 00:04:57.790 CXX test/cpp_headers/bit_pool.o 00:04:57.790 CXX test/cpp_headers/blob_bdev.o 00:04:57.790 CXX test/cpp_headers/blobfs_bdev.o 00:04:57.790 CXX test/cpp_headers/blobfs.o 00:04:57.790 CXX test/cpp_headers/blob.o 00:04:57.790 CXX test/cpp_headers/conf.o 00:04:57.790 CXX test/cpp_headers/config.o 00:04:57.790 CXX test/cpp_headers/cpuset.o 00:04:57.790 CXX test/cpp_headers/crc16.o 00:04:57.790 CC app/iscsi_tgt/iscsi_tgt.o 00:04:57.790 CC app/nvmf_tgt/nvmf_main.o 00:04:57.790 CXX test/cpp_headers/crc32.o 00:04:57.790 CC app/spdk_tgt/spdk_tgt.o 00:04:57.790 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:57.790 CC test/env/pci/pci_ut.o 00:04:57.790 CC examples/util/zipf/zipf.o 00:04:57.790 CC test/app/jsoncat/jsoncat.o 00:04:57.790 CC test/app/histogram_perf/histogram_perf.o 00:04:57.790 CC test/app/stub/stub.o 00:04:57.790 CC test/env/memory/memory_ut.o 00:04:57.790 CC test/thread/poller_perf/poller_perf.o 00:04:57.790 CC app/fio/nvme/fio_plugin.o 00:04:57.790 CC examples/ioat/verify/verify.o 00:04:57.790 CC test/env/vtophys/vtophys.o 00:04:57.790 CC examples/ioat/perf/perf.o 00:04:57.790 CC test/app/bdev_svc/bdev_svc.o 00:04:57.790 CC test/dma/test_dma/test_dma.o 00:04:57.790 CC app/fio/bdev/fio_plugin.o 00:04:58.054 LINK 
spdk_lspci 00:04:58.054 CC test/env/mem_callbacks/mem_callbacks.o 00:04:58.054 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:58.054 LINK rpc_client_test 00:04:58.054 LINK spdk_nvme_discover 00:04:58.054 LINK jsoncat 00:04:58.054 LINK histogram_perf 00:04:58.055 LINK poller_perf 00:04:58.055 CXX test/cpp_headers/crc64.o 00:04:58.055 LINK interrupt_tgt 00:04:58.055 CXX test/cpp_headers/dif.o 00:04:58.055 LINK vtophys 00:04:58.055 CXX test/cpp_headers/dma.o 00:04:58.055 CXX test/cpp_headers/endian.o 00:04:58.055 LINK env_dpdk_post_init 00:04:58.055 CXX test/cpp_headers/env_dpdk.o 00:04:58.055 LINK zipf 00:04:58.055 CXX test/cpp_headers/env.o 00:04:58.055 CXX test/cpp_headers/event.o 00:04:58.055 CXX test/cpp_headers/fd_group.o 00:04:58.055 LINK nvmf_tgt 00:04:58.055 CXX test/cpp_headers/fd.o 00:04:58.055 CXX test/cpp_headers/file.o 00:04:58.055 LINK stub 00:04:58.055 CXX test/cpp_headers/fsdev.o 00:04:58.322 CXX test/cpp_headers/fsdev_module.o 00:04:58.322 LINK iscsi_tgt 00:04:58.322 CXX test/cpp_headers/ftl.o 00:04:58.322 CXX test/cpp_headers/fuse_dispatcher.o 00:04:58.322 LINK spdk_trace_record 00:04:58.322 CXX test/cpp_headers/gpt_spec.o 00:04:58.322 LINK bdev_svc 00:04:58.322 LINK verify 00:04:58.322 CXX test/cpp_headers/hexlify.o 00:04:58.322 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:58.322 LINK ioat_perf 00:04:58.322 CXX test/cpp_headers/histogram_data.o 00:04:58.322 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:58.322 LINK spdk_tgt 00:04:58.322 CXX test/cpp_headers/idxd.o 00:04:58.322 CXX test/cpp_headers/idxd_spec.o 00:04:58.322 LINK mem_callbacks 00:04:58.322 CXX test/cpp_headers/init.o 00:04:58.322 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:58.322 CXX test/cpp_headers/ioat.o 00:04:58.322 LINK spdk_dd 00:04:58.322 CXX test/cpp_headers/ioat_spec.o 00:04:58.586 CXX test/cpp_headers/iscsi_spec.o 00:04:58.586 CXX test/cpp_headers/json.o 00:04:58.586 CXX test/cpp_headers/jsonrpc.o 00:04:58.586 CXX test/cpp_headers/keyring.o 00:04:58.586 CXX 
test/cpp_headers/keyring_module.o 00:04:58.586 CXX test/cpp_headers/likely.o 00:04:58.586 CXX test/cpp_headers/log.o 00:04:58.586 CXX test/cpp_headers/lvol.o 00:04:58.586 CXX test/cpp_headers/md5.o 00:04:58.586 CXX test/cpp_headers/memory.o 00:04:58.586 CXX test/cpp_headers/mmio.o 00:04:58.586 CXX test/cpp_headers/nbd.o 00:04:58.586 CXX test/cpp_headers/net.o 00:04:58.586 CXX test/cpp_headers/notify.o 00:04:58.586 CXX test/cpp_headers/nvme.o 00:04:58.586 LINK spdk_trace 00:04:58.586 LINK pci_ut 00:04:58.586 CXX test/cpp_headers/nvme_intel.o 00:04:58.586 CXX test/cpp_headers/nvme_ocssd.o 00:04:58.586 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:58.586 CXX test/cpp_headers/nvme_spec.o 00:04:58.586 CXX test/cpp_headers/nvme_zns.o 00:04:58.586 CXX test/cpp_headers/nvmf_cmd.o 00:04:58.586 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:58.586 CXX test/cpp_headers/nvmf.o 00:04:58.586 CXX test/cpp_headers/nvmf_spec.o 00:04:58.849 CXX test/cpp_headers/nvmf_transport.o 00:04:58.849 CC examples/sock/hello_world/hello_sock.o 00:04:58.849 CXX test/cpp_headers/opal.o 00:04:58.849 CC test/event/event_perf/event_perf.o 00:04:58.849 CC test/event/reactor_perf/reactor_perf.o 00:04:58.849 CC test/event/reactor/reactor.o 00:04:58.849 CXX test/cpp_headers/opal_spec.o 00:04:58.849 LINK nvme_fuzz 00:04:58.849 CC examples/vmd/lsvmd/lsvmd.o 00:04:58.849 CC examples/idxd/perf/perf.o 00:04:58.849 CC examples/thread/thread/thread_ex.o 00:04:58.849 LINK test_dma 00:04:58.849 CC test/event/app_repeat/app_repeat.o 00:04:58.849 CXX test/cpp_headers/pci_ids.o 00:04:58.849 CXX test/cpp_headers/pipe.o 00:04:58.849 CC examples/vmd/led/led.o 00:04:58.849 CXX test/cpp_headers/queue.o 00:04:59.110 CXX test/cpp_headers/reduce.o 00:04:59.110 CXX test/cpp_headers/rpc.o 00:04:59.110 CXX test/cpp_headers/scheduler.o 00:04:59.110 CXX test/cpp_headers/scsi.o 00:04:59.110 CXX test/cpp_headers/scsi_spec.o 00:04:59.110 CXX test/cpp_headers/sock.o 00:04:59.110 CXX test/cpp_headers/stdinc.o 00:04:59.110 CXX 
test/cpp_headers/string.o 00:04:59.110 CXX test/cpp_headers/thread.o 00:04:59.110 CXX test/cpp_headers/trace.o 00:04:59.110 CXX test/cpp_headers/trace_parser.o 00:04:59.110 LINK spdk_bdev 00:04:59.110 CXX test/cpp_headers/tree.o 00:04:59.110 CXX test/cpp_headers/ublk.o 00:04:59.111 CXX test/cpp_headers/util.o 00:04:59.111 CC test/event/scheduler/scheduler.o 00:04:59.111 CXX test/cpp_headers/uuid.o 00:04:59.111 CXX test/cpp_headers/version.o 00:04:59.111 CXX test/cpp_headers/vfio_user_pci.o 00:04:59.111 LINK reactor_perf 00:04:59.111 CXX test/cpp_headers/vfio_user_spec.o 00:04:59.111 LINK event_perf 00:04:59.111 LINK spdk_nvme 00:04:59.111 CXX test/cpp_headers/vhost.o 00:04:59.111 CXX test/cpp_headers/vmd.o 00:04:59.111 CXX test/cpp_headers/xor.o 00:04:59.111 LINK reactor 00:04:59.111 CXX test/cpp_headers/zipf.o 00:04:59.111 LINK lsvmd 00:04:59.111 LINK spdk_nvme_perf 00:04:59.111 CC app/vhost/vhost.o 00:04:59.111 LINK app_repeat 00:04:59.111 LINK vhost_fuzz 00:04:59.371 LINK spdk_nvme_identify 00:04:59.371 LINK hello_sock 00:04:59.371 LINK led 00:04:59.371 LINK thread 00:04:59.371 LINK memory_ut 00:04:59.371 LINK spdk_top 00:04:59.631 CC test/nvme/reset/reset.o 00:04:59.631 CC test/nvme/simple_copy/simple_copy.o 00:04:59.631 LINK idxd_perf 00:04:59.631 CC test/nvme/connect_stress/connect_stress.o 00:04:59.631 CC test/nvme/sgl/sgl.o 00:04:59.631 CC test/nvme/startup/startup.o 00:04:59.631 CC test/nvme/aer/aer.o 00:04:59.631 CC test/nvme/err_injection/err_injection.o 00:04:59.631 CC test/nvme/compliance/nvme_compliance.o 00:04:59.631 CC test/nvme/overhead/overhead.o 00:04:59.631 CC test/nvme/e2edp/nvme_dp.o 00:04:59.631 CC test/nvme/reserve/reserve.o 00:04:59.631 CC test/nvme/boot_partition/boot_partition.o 00:04:59.631 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:59.631 CC test/nvme/cuse/cuse.o 00:04:59.631 CC test/nvme/fdp/fdp.o 00:04:59.631 CC test/nvme/fused_ordering/fused_ordering.o 00:04:59.631 LINK scheduler 00:04:59.631 CC test/accel/dif/dif.o 
00:04:59.631 LINK vhost 00:04:59.631 CC test/blobfs/mkfs/mkfs.o 00:04:59.631 CC test/lvol/esnap/esnap.o 00:04:59.631 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:59.631 CC examples/nvme/reconnect/reconnect.o 00:04:59.631 CC examples/nvme/hello_world/hello_world.o 00:04:59.631 CC examples/nvme/hotplug/hotplug.o 00:04:59.631 CC examples/nvme/arbitration/arbitration.o 00:04:59.631 CC examples/nvme/abort/abort.o 00:04:59.631 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:59.631 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:59.891 LINK startup 00:04:59.891 LINK boot_partition 00:04:59.891 LINK err_injection 00:04:59.891 LINK connect_stress 00:04:59.891 LINK reserve 00:04:59.891 LINK doorbell_aers 00:04:59.891 LINK mkfs 00:04:59.891 LINK sgl 00:04:59.891 LINK aer 00:04:59.891 CC examples/accel/perf/accel_perf.o 00:04:59.891 LINK fused_ordering 00:04:59.891 CC examples/blob/cli/blobcli.o 00:04:59.891 LINK nvme_compliance 00:04:59.891 LINK overhead 00:04:59.891 CC examples/blob/hello_world/hello_blob.o 00:04:59.891 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:59.891 LINK simple_copy 00:04:59.891 LINK fdp 00:04:59.891 LINK reset 00:04:59.891 LINK cmb_copy 00:05:00.150 LINK nvme_dp 00:05:00.150 LINK hotplug 00:05:00.150 LINK pmr_persistence 00:05:00.150 LINK hello_world 00:05:00.150 LINK reconnect 00:05:00.150 LINK arbitration 00:05:00.150 LINK abort 00:05:00.409 LINK hello_blob 00:05:00.409 LINK nvme_manage 00:05:00.409 LINK dif 00:05:00.409 LINK hello_fsdev 00:05:00.667 LINK blobcli 00:05:00.667 LINK accel_perf 00:05:00.667 CC test/bdev/bdevio/bdevio.o 00:05:00.667 LINK iscsi_fuzz 00:05:00.927 CC examples/bdev/hello_world/hello_bdev.o 00:05:00.927 CC examples/bdev/bdevperf/bdevperf.o 00:05:01.185 LINK bdevio 00:05:01.185 LINK cuse 00:05:01.185 LINK hello_bdev 00:05:01.753 LINK bdevperf 00:05:02.012 CC examples/nvmf/nvmf/nvmf.o 00:05:02.579 LINK nvmf 00:05:05.112 LINK esnap 00:05:05.112 00:05:05.112 real 1m7.584s 00:05:05.112 user 9m3.241s 00:05:05.112 sys 
1m57.333s 00:05:05.112 20:06:16 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:05.112 20:06:16 make -- common/autotest_common.sh@10 -- $ set +x 00:05:05.112 ************************************ 00:05:05.112 END TEST make 00:05:05.112 ************************************ 00:05:05.112 20:06:16 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:05.112 20:06:16 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:05.112 20:06:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:05.112 20:06:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:05.112 20:06:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:05.112 20:06:16 -- pm/common@44 -- $ pid=6097 00:05:05.112 20:06:16 -- pm/common@50 -- $ kill -TERM 6097 00:05:05.112 20:06:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:05.112 20:06:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:05.112 20:06:16 -- pm/common@44 -- $ pid=6099 00:05:05.112 20:06:16 -- pm/common@50 -- $ kill -TERM 6099 00:05:05.112 20:06:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:05.112 20:06:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:05:05.112 20:06:16 -- pm/common@44 -- $ pid=6101 00:05:05.112 20:06:16 -- pm/common@50 -- $ kill -TERM 6101 00:05:05.112 20:06:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:05.112 20:06:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:05.112 20:06:16 -- pm/common@44 -- $ pid=6132 00:05:05.112 20:06:16 -- pm/common@50 -- $ sudo -E kill -TERM 6132 00:05:05.112 20:06:17 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:05.112 20:06:17 -- 
spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:05.112 20:06:17 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:05.112 20:06:17 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:05.112 20:06:17 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:05.371 20:06:17 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:05.372 20:06:17 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.372 20:06:17 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.372 20:06:17 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.372 20:06:17 -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.372 20:06:17 -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.372 20:06:17 -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.372 20:06:17 -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.372 20:06:17 -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.372 20:06:17 -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.372 20:06:17 -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.372 20:06:17 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.372 20:06:17 -- scripts/common.sh@344 -- # case "$op" in 00:05:05.372 20:06:17 -- scripts/common.sh@345 -- # : 1 00:05:05.372 20:06:17 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.372 20:06:17 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.372 20:06:17 -- scripts/common.sh@365 -- # decimal 1 00:05:05.372 20:06:17 -- scripts/common.sh@353 -- # local d=1 00:05:05.372 20:06:17 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.372 20:06:17 -- scripts/common.sh@355 -- # echo 1 00:05:05.372 20:06:17 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.372 20:06:17 -- scripts/common.sh@366 -- # decimal 2 00:05:05.372 20:06:17 -- scripts/common.sh@353 -- # local d=2 00:05:05.372 20:06:17 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.372 20:06:17 -- scripts/common.sh@355 -- # echo 2 00:05:05.372 20:06:17 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.372 20:06:17 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.372 20:06:17 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.372 20:06:17 -- scripts/common.sh@368 -- # return 0 00:05:05.372 20:06:17 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.372 20:06:17 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:05.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.372 --rc genhtml_branch_coverage=1 00:05:05.372 --rc genhtml_function_coverage=1 00:05:05.372 --rc genhtml_legend=1 00:05:05.372 --rc geninfo_all_blocks=1 00:05:05.372 --rc geninfo_unexecuted_blocks=1 00:05:05.372 00:05:05.372 ' 00:05:05.372 20:06:17 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:05.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.372 --rc genhtml_branch_coverage=1 00:05:05.372 --rc genhtml_function_coverage=1 00:05:05.372 --rc genhtml_legend=1 00:05:05.372 --rc geninfo_all_blocks=1 00:05:05.372 --rc geninfo_unexecuted_blocks=1 00:05:05.372 00:05:05.372 ' 00:05:05.372 20:06:17 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:05.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.372 --rc genhtml_branch_coverage=1 00:05:05.372 --rc 
genhtml_function_coverage=1 00:05:05.372 --rc genhtml_legend=1 00:05:05.372 --rc geninfo_all_blocks=1 00:05:05.372 --rc geninfo_unexecuted_blocks=1 00:05:05.372 00:05:05.372 ' 00:05:05.372 20:06:17 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:05.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.372 --rc genhtml_branch_coverage=1 00:05:05.372 --rc genhtml_function_coverage=1 00:05:05.372 --rc genhtml_legend=1 00:05:05.372 --rc geninfo_all_blocks=1 00:05:05.372 --rc geninfo_unexecuted_blocks=1 00:05:05.372 00:05:05.372 ' 00:05:05.372 20:06:17 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:05.372 20:06:17 -- nvmf/common.sh@7 -- # uname -s 00:05:05.372 20:06:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:05.372 20:06:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:05.372 20:06:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:05.372 20:06:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:05.372 20:06:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:05.372 20:06:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:05.372 20:06:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:05.372 20:06:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:05.372 20:06:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:05.372 20:06:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:05.372 20:06:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:05.372 20:06:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:05.372 20:06:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:05.372 20:06:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:05.372 20:06:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:05.372 20:06:17 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:05.372 20:06:17 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:05.372 20:06:17 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:05.372 20:06:17 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:05.372 20:06:17 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:05.372 20:06:17 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:05.372 20:06:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.372 20:06:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.372 20:06:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.372 20:06:17 -- paths/export.sh@5 -- # export PATH 00:05:05.372 20:06:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.372 20:06:17 -- nvmf/common.sh@51 -- # : 0 00:05:05.372 20:06:17 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:05.372 20:06:17 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:05:05.372 20:06:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:05.372 20:06:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:05.372 20:06:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:05.372 20:06:17 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:05.372 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:05.372 20:06:17 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:05.372 20:06:17 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:05.372 20:06:17 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:05.372 20:06:17 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:05.372 20:06:17 -- spdk/autotest.sh@32 -- # uname -s 00:05:05.372 20:06:17 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:05.372 20:06:17 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:05.372 20:06:17 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:05.372 20:06:17 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:05.372 20:06:17 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:05.372 20:06:17 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:05.372 20:06:17 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:05.372 20:06:17 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:05.372 20:06:17 -- spdk/autotest.sh@48 -- # udevadm_pid=86894 00:05:05.372 20:06:17 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:05.372 20:06:17 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:05.372 20:06:17 -- pm/common@17 -- # local monitor 00:05:05.372 20:06:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:05.372 20:06:17 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:05:05.372 20:06:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:05.372 20:06:17 -- pm/common@21 -- # date +%s 00:05:05.372 20:06:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:05.372 20:06:17 -- pm/common@21 -- # date +%s 00:05:05.372 20:06:17 -- pm/common@25 -- # sleep 1 00:05:05.372 20:06:17 -- pm/common@21 -- # date +%s 00:05:05.372 20:06:17 -- pm/common@21 -- # date +%s 00:05:05.372 20:06:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731956777 00:05:05.372 20:06:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731956777 00:05:05.372 20:06:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731956777 00:05:05.372 20:06:17 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731956777 00:05:05.372 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731956777_collect-cpu-load.pm.log 00:05:05.372 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731956777_collect-vmstat.pm.log 00:05:05.372 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731956777_collect-cpu-temp.pm.log 00:05:05.372 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731956777_collect-bmc-pm.bmc.pm.log 00:05:06.325 
20:06:18 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:06.325 20:06:18 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:06.325 20:06:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:06.325 20:06:18 -- common/autotest_common.sh@10 -- # set +x 00:05:06.325 20:06:18 -- spdk/autotest.sh@59 -- # create_test_list 00:05:06.325 20:06:18 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:06.325 20:06:18 -- common/autotest_common.sh@10 -- # set +x 00:05:06.325 20:06:18 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:05:06.325 20:06:18 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:06.325 20:06:18 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:06.325 20:06:18 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:06.325 20:06:18 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:06.325 20:06:18 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:06.325 20:06:18 -- common/autotest_common.sh@1457 -- # uname 00:05:06.325 20:06:18 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:06.325 20:06:18 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:06.325 20:06:18 -- common/autotest_common.sh@1477 -- # uname 00:05:06.325 20:06:18 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:06.325 20:06:18 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:06.325 20:06:18 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:06.584 lcov: LCOV version 1.15 00:05:06.584 20:06:18 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:38.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:38.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:43.997 20:06:55 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:43.997 20:06:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:43.997 20:06:55 -- common/autotest_common.sh@10 -- # set +x 00:05:43.997 20:06:55 -- spdk/autotest.sh@78 -- # rm -f 00:05:43.997 20:06:55 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:44.568 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:05:44.568 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:05:44.568 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:05:44.568 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:05:44.568 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:05:44.568 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:05:44.568 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:05:44.568 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:05:44.568 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:05:44.568 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:05:44.568 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:05:44.568 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:05:44.568 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:05:44.568 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:05:44.568 
0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:05:44.829 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:05:44.829 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:05:44.829 20:06:56 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:44.829 20:06:56 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:44.829 20:06:56 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:44.829 20:06:56 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:44.829 20:06:56 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:44.829 20:06:56 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:44.829 20:06:56 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:44.829 20:06:56 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:44.829 20:06:56 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:44.829 20:06:56 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:44.829 20:06:56 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:44.829 20:06:56 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:44.829 20:06:56 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:44.829 20:06:56 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:44.829 20:06:56 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:44.829 No valid GPT data, bailing 00:05:44.829 20:06:56 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:44.829 20:06:56 -- scripts/common.sh@394 -- # pt= 00:05:44.829 20:06:56 -- scripts/common.sh@395 -- # return 1 00:05:44.829 20:06:56 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:44.829 1+0 records in 00:05:44.829 1+0 records out 00:05:44.829 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00155404 s, 675 MB/s 00:05:44.829 20:06:56 -- spdk/autotest.sh@105 -- # sync 00:05:44.829 20:06:56 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:44.829 20:06:56 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:44.829 20:06:56 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:46.740 20:06:58 -- spdk/autotest.sh@111 -- # uname -s 00:05:46.740 20:06:58 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:46.740 20:06:58 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:46.740 20:06:58 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:48.124 Hugepages 00:05:48.124 node hugesize free / total 00:05:48.124 node0 1048576kB 0 / 0 00:05:48.124 node0 2048kB 0 / 0 00:05:48.124 node1 1048576kB 0 / 0 00:05:48.124 node1 2048kB 0 / 0 00:05:48.124 00:05:48.124 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:48.124 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:48.124 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:48.124 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:48.124 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:48.124 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:48.124 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:48.124 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:48.124 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:48.124 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:48.124 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:48.124 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:48.125 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:48.125 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:48.125 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:48.125 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:48.125 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:48.125 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:48.125 20:06:59 -- spdk/autotest.sh@117 -- # uname -s 00:05:48.125 20:06:59 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:48.125 20:06:59 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:05:48.125 20:06:59 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:49.511 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:49.511 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:49.511 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:49.511 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:49.511 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:49.511 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:49.511 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:49.511 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:49.511 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:49.511 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:49.511 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:49.511 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:49.511 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:49.511 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:49.511 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:49.511 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:50.456 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:50.456 20:07:02 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:51.394 20:07:03 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:51.394 20:07:03 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:51.394 20:07:03 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:51.394 20:07:03 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:51.394 20:07:03 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:51.394 20:07:03 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:51.394 20:07:03 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:51.394 20:07:03 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:51.394 20:07:03 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:05:51.654 20:07:03 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:51.654 20:07:03 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:51.654 20:07:03 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:53.040 Waiting for block devices as requested 00:05:53.040 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:53.040 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:53.040 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:53.040 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:53.040 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:53.299 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:53.299 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:53.299 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:53.299 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:53.560 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:53.560 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:53.560 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:53.560 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:53.821 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:53.821 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:53.821 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:54.081 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:54.081 20:07:05 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:54.081 20:07:05 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:54.081 20:07:05 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:54.081 20:07:05 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:05:54.081 20:07:05 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:54.081 20:07:05 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:54.081 20:07:05 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:54.081 20:07:05 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:54.081 20:07:05 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:54.081 20:07:05 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:54.081 20:07:05 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:54.081 20:07:05 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:54.081 20:07:05 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:54.081 20:07:05 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:05:54.081 20:07:05 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:54.081 20:07:05 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:54.081 20:07:05 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:54.081 20:07:05 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:54.081 20:07:05 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:54.081 20:07:05 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:54.081 20:07:05 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:54.081 20:07:05 -- common/autotest_common.sh@1543 -- # continue 00:05:54.081 20:07:05 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:54.081 20:07:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:54.081 20:07:05 -- common/autotest_common.sh@10 -- # set +x 00:05:54.081 20:07:06 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:54.081 20:07:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:54.081 20:07:06 -- common/autotest_common.sh@10 -- # set +x 00:05:54.081 20:07:06 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:55.466 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:55.466 0000:00:04.6 (8086 0e26): 
ioatdma -> vfio-pci 00:05:55.466 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:55.466 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:55.466 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:55.466 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:55.466 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:55.466 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:55.466 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:55.466 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:55.466 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:55.466 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:55.467 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:55.467 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:55.467 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:55.467 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:56.409 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:56.409 20:07:08 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:56.409 20:07:08 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:56.409 20:07:08 -- common/autotest_common.sh@10 -- # set +x 00:05:56.409 20:07:08 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:56.409 20:07:08 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:56.409 20:07:08 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:56.409 20:07:08 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:56.409 20:07:08 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:56.409 20:07:08 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:56.409 20:07:08 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:56.409 20:07:08 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:56.409 20:07:08 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:56.409 20:07:08 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:56.409 20:07:08 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:05:56.409 20:07:08 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:56.409 20:07:08 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:56.668 20:07:08 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:56.668 20:07:08 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:56.668 20:07:08 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:56.668 20:07:08 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:56.668 20:07:08 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:56.668 20:07:08 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:56.668 20:07:08 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:56.668 20:07:08 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:05:56.668 20:07:08 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:88:00.0 00:05:56.668 20:07:08 -- common/autotest_common.sh@1579 -- # [[ -z 0000:88:00.0 ]] 00:05:56.668 20:07:08 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=97441 00:05:56.668 20:07:08 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.668 20:07:08 -- common/autotest_common.sh@1585 -- # waitforlisten 97441 00:05:56.668 20:07:08 -- common/autotest_common.sh@835 -- # '[' -z 97441 ']' 00:05:56.668 20:07:08 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.668 20:07:08 -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.668 20:07:08 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:56.668 20:07:08 -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.668 20:07:08 -- common/autotest_common.sh@10 -- # set +x 00:05:56.668 [2024-11-18 20:07:08.526289] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:05:56.668 [2024-11-18 20:07:08.526376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97441 ] 00:05:56.668 [2024-11-18 20:07:08.595077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.668 [2024-11-18 20:07:08.638482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.927 20:07:08 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.927 20:07:08 -- common/autotest_common.sh@868 -- # return 0 00:05:56.927 20:07:08 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:05:56.927 20:07:08 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:56.927 20:07:08 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:06:00.212 nvme0n1 00:06:00.212 20:07:11 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:00.471 [2024-11-18 20:07:12.239675] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:06:00.471 [2024-11-18 20:07:12.239716] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:06:00.471 request: 00:06:00.471 { 00:06:00.471 "nvme_ctrlr_name": "nvme0", 00:06:00.471 "password": "test", 00:06:00.471 "method": "bdev_nvme_opal_revert", 00:06:00.471 "req_id": 1 00:06:00.471 } 00:06:00.471 Got JSON-RPC error response 00:06:00.471 response: 00:06:00.471 { 00:06:00.471 
"code": -32603, 00:06:00.471 "message": "Internal error" 00:06:00.471 } 00:06:00.471 20:07:12 -- common/autotest_common.sh@1591 -- # true 00:06:00.471 20:07:12 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:06:00.471 20:07:12 -- common/autotest_common.sh@1595 -- # killprocess 97441 00:06:00.471 20:07:12 -- common/autotest_common.sh@954 -- # '[' -z 97441 ']' 00:06:00.471 20:07:12 -- common/autotest_common.sh@958 -- # kill -0 97441 00:06:00.471 20:07:12 -- common/autotest_common.sh@959 -- # uname 00:06:00.471 20:07:12 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.471 20:07:12 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97441 00:06:00.471 20:07:12 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.471 20:07:12 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.471 20:07:12 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97441' 00:06:00.471 killing process with pid 97441 00:06:00.471 20:07:12 -- common/autotest_common.sh@973 -- # kill 97441 00:06:00.471 20:07:12 -- common/autotest_common.sh@978 -- # wait 97441 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared 
instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared 
instead of 2097152 00:06:00.471 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared 
instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared 
instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared 
instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared 
instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared 
instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared 
instead of 2097152 00:06:00.472 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.473 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.473 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.473 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.473 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.473 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.473 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.473 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.473 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.473 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:02.375 20:07:14 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:02.375 20:07:14 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:02.375 20:07:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:02.375 20:07:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:02.375 20:07:14 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:02.375 20:07:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:02.375 20:07:14 -- common/autotest_common.sh@10 -- # set +x 00:06:02.375 20:07:14 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:02.375 20:07:14 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:02.375 20:07:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.375 20:07:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.375 20:07:14 -- common/autotest_common.sh@10 -- # set +x 00:06:02.375 ************************************ 00:06:02.375 START TEST env 00:06:02.375 ************************************ 00:06:02.375 20:07:14 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:02.375 * Looking for test 
storage... 00:06:02.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:02.375 20:07:14 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:02.375 20:07:14 env -- common/autotest_common.sh@1693 -- # lcov --version 00:06:02.375 20:07:14 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:02.375 20:07:14 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:02.375 20:07:14 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.375 20:07:14 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.375 20:07:14 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.375 20:07:14 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.375 20:07:14 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.375 20:07:14 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.375 20:07:14 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.375 20:07:14 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.375 20:07:14 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.375 20:07:14 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.375 20:07:14 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.375 20:07:14 env -- scripts/common.sh@344 -- # case "$op" in 00:06:02.375 20:07:14 env -- scripts/common.sh@345 -- # : 1 00:06:02.375 20:07:14 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.375 20:07:14 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.375 20:07:14 env -- scripts/common.sh@365 -- # decimal 1 00:06:02.375 20:07:14 env -- scripts/common.sh@353 -- # local d=1 00:06:02.375 20:07:14 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.375 20:07:14 env -- scripts/common.sh@355 -- # echo 1 00:06:02.375 20:07:14 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.375 20:07:14 env -- scripts/common.sh@366 -- # decimal 2 00:06:02.375 20:07:14 env -- scripts/common.sh@353 -- # local d=2 00:06:02.375 20:07:14 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.375 20:07:14 env -- scripts/common.sh@355 -- # echo 2 00:06:02.375 20:07:14 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.375 20:07:14 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.375 20:07:14 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.375 20:07:14 env -- scripts/common.sh@368 -- # return 0 00:06:02.375 20:07:14 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.375 20:07:14 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:02.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.375 --rc genhtml_branch_coverage=1 00:06:02.376 --rc genhtml_function_coverage=1 00:06:02.376 --rc genhtml_legend=1 00:06:02.376 --rc geninfo_all_blocks=1 00:06:02.376 --rc geninfo_unexecuted_blocks=1 00:06:02.376 00:06:02.376 ' 00:06:02.376 20:07:14 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:02.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.376 --rc genhtml_branch_coverage=1 00:06:02.376 --rc genhtml_function_coverage=1 00:06:02.376 --rc genhtml_legend=1 00:06:02.376 --rc geninfo_all_blocks=1 00:06:02.376 --rc geninfo_unexecuted_blocks=1 00:06:02.376 00:06:02.376 ' 00:06:02.376 20:07:14 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:02.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:02.376 --rc genhtml_branch_coverage=1 00:06:02.376 --rc genhtml_function_coverage=1 00:06:02.376 --rc genhtml_legend=1 00:06:02.376 --rc geninfo_all_blocks=1 00:06:02.376 --rc geninfo_unexecuted_blocks=1 00:06:02.376 00:06:02.376 ' 00:06:02.376 20:07:14 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:02.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.376 --rc genhtml_branch_coverage=1 00:06:02.376 --rc genhtml_function_coverage=1 00:06:02.376 --rc genhtml_legend=1 00:06:02.376 --rc geninfo_all_blocks=1 00:06:02.376 --rc geninfo_unexecuted_blocks=1 00:06:02.376 00:06:02.376 ' 00:06:02.376 20:07:14 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:02.376 20:07:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.376 20:07:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.376 20:07:14 env -- common/autotest_common.sh@10 -- # set +x 00:06:02.376 ************************************ 00:06:02.376 START TEST env_memory 00:06:02.376 ************************************ 00:06:02.376 20:07:14 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:02.376 00:06:02.376 00:06:02.376 CUnit - A unit testing framework for C - Version 2.1-3 00:06:02.376 http://cunit.sourceforge.net/ 00:06:02.376 00:06:02.376 00:06:02.376 Suite: memory 00:06:02.376 Test: alloc and free memory map ...[2024-11-18 20:07:14.273942] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:02.376 passed 00:06:02.376 Test: mem map translation ...[2024-11-18 20:07:14.293569] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:02.376 [2024-11-18 
20:07:14.293589] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:02.376 [2024-11-18 20:07:14.293675] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:02.376 [2024-11-18 20:07:14.293689] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:02.376 passed 00:06:02.376 Test: mem map registration ...[2024-11-18 20:07:14.334996] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:02.376 [2024-11-18 20:07:14.335015] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:02.376 passed 00:06:02.635 Test: mem map adjacent registrations ...passed 00:06:02.635 00:06:02.635 Run Summary: Type Total Ran Passed Failed Inactive 00:06:02.635 suites 1 1 n/a 0 0 00:06:02.635 tests 4 4 4 0 0 00:06:02.635 asserts 152 152 152 0 n/a 00:06:02.635 00:06:02.635 Elapsed time = 0.141 seconds 00:06:02.635 00:06:02.635 real 0m0.150s 00:06:02.635 user 0m0.140s 00:06:02.635 sys 0m0.009s 00:06:02.635 20:07:14 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.635 20:07:14 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:02.635 ************************************ 00:06:02.635 END TEST env_memory 00:06:02.635 ************************************ 00:06:02.635 20:07:14 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:02.635 20:07:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:06:02.635 20:07:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.635 20:07:14 env -- common/autotest_common.sh@10 -- # set +x 00:06:02.635 ************************************ 00:06:02.635 START TEST env_vtophys 00:06:02.635 ************************************ 00:06:02.635 20:07:14 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:02.635 EAL: lib.eal log level changed from notice to debug 00:06:02.635 EAL: Detected lcore 0 as core 0 on socket 0 00:06:02.635 EAL: Detected lcore 1 as core 1 on socket 0 00:06:02.635 EAL: Detected lcore 2 as core 2 on socket 0 00:06:02.635 EAL: Detected lcore 3 as core 3 on socket 0 00:06:02.635 EAL: Detected lcore 4 as core 4 on socket 0 00:06:02.635 EAL: Detected lcore 5 as core 5 on socket 0 00:06:02.635 EAL: Detected lcore 6 as core 8 on socket 0 00:06:02.635 EAL: Detected lcore 7 as core 9 on socket 0 00:06:02.635 EAL: Detected lcore 8 as core 10 on socket 0 00:06:02.635 EAL: Detected lcore 9 as core 11 on socket 0 00:06:02.635 EAL: Detected lcore 10 as core 12 on socket 0 00:06:02.635 EAL: Detected lcore 11 as core 13 on socket 0 00:06:02.635 EAL: Detected lcore 12 as core 0 on socket 1 00:06:02.635 EAL: Detected lcore 13 as core 1 on socket 1 00:06:02.635 EAL: Detected lcore 14 as core 2 on socket 1 00:06:02.635 EAL: Detected lcore 15 as core 3 on socket 1 00:06:02.635 EAL: Detected lcore 16 as core 4 on socket 1 00:06:02.635 EAL: Detected lcore 17 as core 5 on socket 1 00:06:02.635 EAL: Detected lcore 18 as core 8 on socket 1 00:06:02.635 EAL: Detected lcore 19 as core 9 on socket 1 00:06:02.635 EAL: Detected lcore 20 as core 10 on socket 1 00:06:02.635 EAL: Detected lcore 21 as core 11 on socket 1 00:06:02.635 EAL: Detected lcore 22 as core 12 on socket 1 00:06:02.635 EAL: Detected lcore 23 as core 13 on socket 1 00:06:02.635 EAL: Detected lcore 24 as core 0 on socket 0 00:06:02.635 EAL: Detected lcore 25 as core 
1 on socket 0 00:06:02.635 EAL: Detected lcore 26 as core 2 on socket 0 00:06:02.635 EAL: Detected lcore 27 as core 3 on socket 0 00:06:02.635 EAL: Detected lcore 28 as core 4 on socket 0 00:06:02.635 EAL: Detected lcore 29 as core 5 on socket 0 00:06:02.635 EAL: Detected lcore 30 as core 8 on socket 0 00:06:02.635 EAL: Detected lcore 31 as core 9 on socket 0 00:06:02.635 EAL: Detected lcore 32 as core 10 on socket 0 00:06:02.635 EAL: Detected lcore 33 as core 11 on socket 0 00:06:02.635 EAL: Detected lcore 34 as core 12 on socket 0 00:06:02.635 EAL: Detected lcore 35 as core 13 on socket 0 00:06:02.635 EAL: Detected lcore 36 as core 0 on socket 1 00:06:02.635 EAL: Detected lcore 37 as core 1 on socket 1 00:06:02.635 EAL: Detected lcore 38 as core 2 on socket 1 00:06:02.635 EAL: Detected lcore 39 as core 3 on socket 1 00:06:02.635 EAL: Detected lcore 40 as core 4 on socket 1 00:06:02.635 EAL: Detected lcore 41 as core 5 on socket 1 00:06:02.635 EAL: Detected lcore 42 as core 8 on socket 1 00:06:02.635 EAL: Detected lcore 43 as core 9 on socket 1 00:06:02.635 EAL: Detected lcore 44 as core 10 on socket 1 00:06:02.635 EAL: Detected lcore 45 as core 11 on socket 1 00:06:02.635 EAL: Detected lcore 46 as core 12 on socket 1 00:06:02.635 EAL: Detected lcore 47 as core 13 on socket 1 00:06:02.635 EAL: Maximum logical cores by configuration: 128 00:06:02.635 EAL: Detected CPU lcores: 48 00:06:02.635 EAL: Detected NUMA nodes: 2 00:06:02.635 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:06:02.635 EAL: Detected shared linkage of DPDK 00:06:02.635 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:06:02.635 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:06:02.635 EAL: Registered [vdev] bus. 
00:06:02.635 EAL: bus.vdev log level changed from disabled to notice 00:06:02.635 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:06:02.635 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:06:02.635 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:02.635 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:02.635 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:06:02.635 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:06:02.635 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:06:02.635 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:06:02.635 EAL: No shared files mode enabled, IPC will be disabled 00:06:02.635 EAL: No shared files mode enabled, IPC is disabled 00:06:02.635 EAL: Bus pci wants IOVA as 'DC' 00:06:02.635 EAL: Bus vdev wants IOVA as 'DC' 00:06:02.636 EAL: Buses did not request a specific IOVA mode. 00:06:02.636 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:02.636 EAL: Selected IOVA mode 'VA' 00:06:02.636 EAL: Probing VFIO support... 00:06:02.636 EAL: IOMMU type 1 (Type 1) is supported 00:06:02.636 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:02.636 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:02.636 EAL: VFIO support initialized 00:06:02.636 EAL: Ask a virtual area of 0x2e000 bytes 00:06:02.636 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:02.636 EAL: Setting up physically contiguous memory... 
00:06:02.636 EAL: Setting maximum number of open files to 524288 00:06:02.636 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:02.636 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:02.636 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:02.636 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.636 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:02.636 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:02.636 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.636 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:02.636 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:02.636 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.636 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:02.636 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:02.636 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.636 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:02.636 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:02.636 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.636 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:02.636 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:02.636 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.636 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:02.636 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:02.636 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.636 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:02.636 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:02.636 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.636 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:02.636 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:02.636 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:06:02.636 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.636 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:02.636 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:02.636 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.636 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:02.636 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:02.636 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.636 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:02.636 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:02.636 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.636 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:02.636 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:02.636 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.636 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:02.636 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:02.636 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.636 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:02.636 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:02.636 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.636 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:02.636 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:02.636 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.636 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:06:02.636 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:02.636 EAL: Hugepages will be freed exactly as allocated. 
00:06:02.636 EAL: No shared files mode enabled, IPC is disabled 00:06:02.636 EAL: No shared files mode enabled, IPC is disabled 00:06:02.636 EAL: TSC frequency is ~2700000 KHz 00:06:02.636 EAL: Main lcore 0 is ready (tid=7f3b7713ba00;cpuset=[0]) 00:06:02.636 EAL: Trying to obtain current memory policy. 00:06:02.636 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.636 EAL: Restoring previous memory policy: 0 00:06:02.636 EAL: request: mp_malloc_sync 00:06:02.636 EAL: No shared files mode enabled, IPC is disabled 00:06:02.636 EAL: Heap on socket 0 was expanded by 2MB 00:06:02.636 EAL: No shared files mode enabled, IPC is disabled 00:06:02.636 EAL: No shared files mode enabled, IPC is disabled 00:06:02.636 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:02.636 EAL: Mem event callback 'spdk:(nil)' registered 00:06:02.636 00:06:02.636 00:06:02.636 CUnit - A unit testing framework for C - Version 2.1-3 00:06:02.636 http://cunit.sourceforge.net/ 00:06:02.636 00:06:02.636 00:06:02.636 Suite: components_suite 00:06:02.636 Test: vtophys_malloc_test ...passed 00:06:02.636 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:02.636 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.636 EAL: Restoring previous memory policy: 4 00:06:02.636 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.636 EAL: request: mp_malloc_sync 00:06:02.636 EAL: No shared files mode enabled, IPC is disabled 00:06:02.636 EAL: Heap on socket 0 was expanded by 4MB 00:06:02.636 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.636 EAL: request: mp_malloc_sync 00:06:02.636 EAL: No shared files mode enabled, IPC is disabled 00:06:02.636 EAL: Heap on socket 0 was shrunk by 4MB 00:06:02.636 EAL: Trying to obtain current memory policy. 
00:06:02.636 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.636 EAL: Restoring previous memory policy: 4 00:06:02.636 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.636 EAL: request: mp_malloc_sync 00:06:02.636 EAL: No shared files mode enabled, IPC is disabled 00:06:02.636 EAL: Heap on socket 0 was expanded by 6MB 00:06:02.636 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.636 EAL: request: mp_malloc_sync 00:06:02.636 EAL: No shared files mode enabled, IPC is disabled 00:06:02.636 EAL: Heap on socket 0 was shrunk by 6MB 00:06:02.636 EAL: Trying to obtain current memory policy. 00:06:02.636 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.636 EAL: Restoring previous memory policy: 4 00:06:02.636 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.636 EAL: request: mp_malloc_sync 00:06:02.636 EAL: No shared files mode enabled, IPC is disabled 00:06:02.636 EAL: Heap on socket 0 was expanded by 10MB 00:06:02.636 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.636 EAL: request: mp_malloc_sync 00:06:02.636 EAL: No shared files mode enabled, IPC is disabled 00:06:02.636 EAL: Heap on socket 0 was shrunk by 10MB 00:06:02.636 EAL: Trying to obtain current memory policy. 00:06:02.636 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.636 EAL: Restoring previous memory policy: 4 00:06:02.636 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.636 EAL: request: mp_malloc_sync 00:06:02.636 EAL: No shared files mode enabled, IPC is disabled 00:06:02.636 EAL: Heap on socket 0 was expanded by 18MB 00:06:02.636 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.636 EAL: request: mp_malloc_sync 00:06:02.636 EAL: No shared files mode enabled, IPC is disabled 00:06:02.636 EAL: Heap on socket 0 was shrunk by 18MB 00:06:02.636 EAL: Trying to obtain current memory policy. 
00:06:02.636 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.636 EAL: Restoring previous memory policy: 4 00:06:02.636 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.636 EAL: request: mp_malloc_sync 00:06:02.636 EAL: No shared files mode enabled, IPC is disabled 00:06:02.636 EAL: Heap on socket 0 was expanded by 34MB 00:06:02.636 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.636 EAL: request: mp_malloc_sync 00:06:02.636 EAL: No shared files mode enabled, IPC is disabled 00:06:02.636 EAL: Heap on socket 0 was shrunk by 34MB 00:06:02.636 EAL: Trying to obtain current memory policy. 00:06:02.636 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.636 EAL: Restoring previous memory policy: 4 00:06:02.636 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.636 EAL: request: mp_malloc_sync 00:06:02.636 EAL: No shared files mode enabled, IPC is disabled 00:06:02.636 EAL: Heap on socket 0 was expanded by 66MB 00:06:02.636 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.636 EAL: request: mp_malloc_sync 00:06:02.636 EAL: No shared files mode enabled, IPC is disabled 00:06:02.636 EAL: Heap on socket 0 was shrunk by 66MB 00:06:02.636 EAL: Trying to obtain current memory policy. 00:06:02.636 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.636 EAL: Restoring previous memory policy: 4 00:06:02.636 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.636 EAL: request: mp_malloc_sync 00:06:02.636 EAL: No shared files mode enabled, IPC is disabled 00:06:02.636 EAL: Heap on socket 0 was expanded by 130MB 00:06:02.636 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.895 EAL: request: mp_malloc_sync 00:06:02.895 EAL: No shared files mode enabled, IPC is disabled 00:06:02.895 EAL: Heap on socket 0 was shrunk by 130MB 00:06:02.895 EAL: Trying to obtain current memory policy. 
00:06:02.895 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.895 EAL: Restoring previous memory policy: 4 00:06:02.895 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.895 EAL: request: mp_malloc_sync 00:06:02.895 EAL: No shared files mode enabled, IPC is disabled 00:06:02.895 EAL: Heap on socket 0 was expanded by 258MB 00:06:02.895 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.895 EAL: request: mp_malloc_sync 00:06:02.895 EAL: No shared files mode enabled, IPC is disabled 00:06:02.895 EAL: Heap on socket 0 was shrunk by 258MB 00:06:02.895 EAL: Trying to obtain current memory policy. 00:06:02.895 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:03.153 EAL: Restoring previous memory policy: 4 00:06:03.153 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.153 EAL: request: mp_malloc_sync 00:06:03.153 EAL: No shared files mode enabled, IPC is disabled 00:06:03.153 EAL: Heap on socket 0 was expanded by 514MB 00:06:03.153 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.412 EAL: request: mp_malloc_sync 00:06:03.412 EAL: No shared files mode enabled, IPC is disabled 00:06:03.412 EAL: Heap on socket 0 was shrunk by 514MB 00:06:03.412 EAL: Trying to obtain current memory policy. 
00:06:03.412 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:03.670 EAL: Restoring previous memory policy: 4 00:06:03.670 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.670 EAL: request: mp_malloc_sync 00:06:03.670 EAL: No shared files mode enabled, IPC is disabled 00:06:03.670 EAL: Heap on socket 0 was expanded by 1026MB 00:06:03.670 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.929 EAL: request: mp_malloc_sync 00:06:03.929 EAL: No shared files mode enabled, IPC is disabled 00:06:03.929 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:03.929 passed 00:06:03.929 00:06:03.929 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.929 suites 1 1 n/a 0 0 00:06:03.929 tests 2 2 2 0 0 00:06:03.929 asserts 497 497 497 0 n/a 00:06:03.929 00:06:03.929 Elapsed time = 1.328 seconds 00:06:03.929 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.929 EAL: request: mp_malloc_sync 00:06:03.929 EAL: No shared files mode enabled, IPC is disabled 00:06:03.929 EAL: Heap on socket 0 was shrunk by 2MB 00:06:03.929 EAL: No shared files mode enabled, IPC is disabled 00:06:03.929 EAL: No shared files mode enabled, IPC is disabled 00:06:03.929 EAL: No shared files mode enabled, IPC is disabled 00:06:03.929 00:06:03.929 real 0m1.444s 00:06:03.929 user 0m0.839s 00:06:03.929 sys 0m0.570s 00:06:03.929 20:07:15 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.929 20:07:15 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:03.929 ************************************ 00:06:03.929 END TEST env_vtophys 00:06:03.929 ************************************ 00:06:03.929 20:07:15 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:03.929 20:07:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.929 20:07:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.929 20:07:15 env -- common/autotest_common.sh@10 -- # set +x 00:06:03.929 
************************************ 00:06:03.929 START TEST env_pci 00:06:03.929 ************************************ 00:06:03.929 20:07:15 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:04.189 00:06:04.189 00:06:04.189 CUnit - A unit testing framework for C - Version 2.1-3 00:06:04.189 http://cunit.sourceforge.net/ 00:06:04.189 00:06:04.189 00:06:04.189 Suite: pci 00:06:04.189 Test: pci_hook ...[2024-11-18 20:07:15.945179] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 98342 has claimed it 00:06:04.189 EAL: Cannot find device (10000:00:01.0) 00:06:04.189 EAL: Failed to attach device on primary process 00:06:04.189 passed 00:06:04.189 00:06:04.189 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.189 suites 1 1 n/a 0 0 00:06:04.189 tests 1 1 1 0 0 00:06:04.189 asserts 25 25 25 0 n/a 00:06:04.189 00:06:04.189 Elapsed time = 0.021 seconds 00:06:04.189 00:06:04.189 real 0m0.033s 00:06:04.189 user 0m0.008s 00:06:04.189 sys 0m0.025s 00:06:04.189 20:07:15 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.189 20:07:15 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:04.189 ************************************ 00:06:04.189 END TEST env_pci 00:06:04.189 ************************************ 00:06:04.189 20:07:15 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:04.189 20:07:15 env -- env/env.sh@15 -- # uname 00:06:04.189 20:07:15 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:04.189 20:07:15 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:04.189 20:07:15 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:04.189 20:07:15 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:04.189 20:07:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.189 20:07:15 env -- common/autotest_common.sh@10 -- # set +x 00:06:04.189 ************************************ 00:06:04.189 START TEST env_dpdk_post_init 00:06:04.189 ************************************ 00:06:04.189 20:07:16 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:04.189 EAL: Detected CPU lcores: 48 00:06:04.189 EAL: Detected NUMA nodes: 2 00:06:04.189 EAL: Detected shared linkage of DPDK 00:06:04.189 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:04.189 EAL: Selected IOVA mode 'VA' 00:06:04.189 EAL: VFIO support initialized 00:06:04.189 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:04.189 EAL: Using IOMMU type 1 (Type 1) 00:06:04.189 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:06:04.189 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:06:04.189 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:06:04.189 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:06:04.189 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:06:04.449 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:06:04.449 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:06:04.449 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:06:04.449 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:06:04.449 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:06:04.449 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:06:04.450 EAL: Probe PCI driver: 
spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:06:04.450 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:06:04.450 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:06:04.450 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:06:04.450 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:06:05.390 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:06:08.673 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:06:08.673 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:06:08.673 Starting DPDK initialization... 00:06:08.673 Starting SPDK post initialization... 00:06:08.673 SPDK NVMe probe 00:06:08.673 Attaching to 0000:88:00.0 00:06:08.673 Attached to 0000:88:00.0 00:06:08.673 Cleaning up... 00:06:08.673 00:06:08.673 real 0m4.412s 00:06:08.673 user 0m3.272s 00:06:08.673 sys 0m0.196s 00:06:08.673 20:07:20 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.674 20:07:20 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:08.674 ************************************ 00:06:08.674 END TEST env_dpdk_post_init 00:06:08.674 ************************************ 00:06:08.674 20:07:20 env -- env/env.sh@26 -- # uname 00:06:08.674 20:07:20 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:08.674 20:07:20 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:08.674 20:07:20 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.674 20:07:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.674 20:07:20 env -- common/autotest_common.sh@10 -- # set +x 00:06:08.674 ************************************ 00:06:08.674 START TEST env_mem_callbacks 00:06:08.674 ************************************ 00:06:08.674 20:07:20 
env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:08.674 EAL: Detected CPU lcores: 48 00:06:08.674 EAL: Detected NUMA nodes: 2 00:06:08.674 EAL: Detected shared linkage of DPDK 00:06:08.674 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:08.674 EAL: Selected IOVA mode 'VA' 00:06:08.674 EAL: VFIO support initialized 00:06:08.674 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:08.674 00:06:08.674 00:06:08.674 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.674 http://cunit.sourceforge.net/ 00:06:08.674 00:06:08.674 00:06:08.674 Suite: memory 00:06:08.674 Test: test ... 00:06:08.674 register 0x200000200000 2097152 00:06:08.674 malloc 3145728 00:06:08.674 register 0x200000400000 4194304 00:06:08.674 buf 0x200000500000 len 3145728 PASSED 00:06:08.674 malloc 64 00:06:08.674 buf 0x2000004fff40 len 64 PASSED 00:06:08.674 malloc 4194304 00:06:08.674 register 0x200000800000 6291456 00:06:08.674 buf 0x200000a00000 len 4194304 PASSED 00:06:08.674 free 0x200000500000 3145728 00:06:08.674 free 0x2000004fff40 64 00:06:08.674 unregister 0x200000400000 4194304 PASSED 00:06:08.674 free 0x200000a00000 4194304 00:06:08.674 unregister 0x200000800000 6291456 PASSED 00:06:08.674 malloc 8388608 00:06:08.674 register 0x200000400000 10485760 00:06:08.674 buf 0x200000600000 len 8388608 PASSED 00:06:08.674 free 0x200000600000 8388608 00:06:08.674 unregister 0x200000400000 10485760 PASSED 00:06:08.674 passed 00:06:08.674 00:06:08.674 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.674 suites 1 1 n/a 0 0 00:06:08.674 tests 1 1 1 0 0 00:06:08.674 asserts 15 15 15 0 n/a 00:06:08.674 00:06:08.674 Elapsed time = 0.005 seconds 00:06:08.674 00:06:08.674 real 0m0.048s 00:06:08.674 user 0m0.014s 00:06:08.674 sys 0m0.034s 00:06:08.674 20:07:20 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.674 20:07:20 
env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:08.674 ************************************ 00:06:08.674 END TEST env_mem_callbacks 00:06:08.674 ************************************ 00:06:08.674 00:06:08.674 real 0m6.485s 00:06:08.674 user 0m4.487s 00:06:08.674 sys 0m1.043s 00:06:08.674 20:07:20 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.674 20:07:20 env -- common/autotest_common.sh@10 -- # set +x 00:06:08.674 ************************************ 00:06:08.674 END TEST env 00:06:08.674 ************************************ 00:06:08.674 20:07:20 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:08.674 20:07:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.674 20:07:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.674 20:07:20 -- common/autotest_common.sh@10 -- # set +x 00:06:08.674 ************************************ 00:06:08.674 START TEST rpc 00:06:08.674 ************************************ 00:06:08.674 20:07:20 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:08.674 * Looking for test storage... 
00:06:08.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:08.674 20:07:20 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:08.674 20:07:20 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:08.674 20:07:20 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:08.933 20:07:20 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:08.933 20:07:20 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.933 20:07:20 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.933 20:07:20 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.933 20:07:20 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.933 20:07:20 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.933 20:07:20 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.933 20:07:20 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.933 20:07:20 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.933 20:07:20 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.933 20:07:20 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.933 20:07:20 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.933 20:07:20 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:08.933 20:07:20 rpc -- scripts/common.sh@345 -- # : 1 00:06:08.933 20:07:20 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.933 20:07:20 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.933 20:07:20 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:08.933 20:07:20 rpc -- scripts/common.sh@353 -- # local d=1 00:06:08.933 20:07:20 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.933 20:07:20 rpc -- scripts/common.sh@355 -- # echo 1 00:06:08.933 20:07:20 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.933 20:07:20 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:08.933 20:07:20 rpc -- scripts/common.sh@353 -- # local d=2 00:06:08.933 20:07:20 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.933 20:07:20 rpc -- scripts/common.sh@355 -- # echo 2 00:06:08.933 20:07:20 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.933 20:07:20 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.933 20:07:20 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.933 20:07:20 rpc -- scripts/common.sh@368 -- # return 0 00:06:08.933 20:07:20 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.933 20:07:20 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:08.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.933 --rc genhtml_branch_coverage=1 00:06:08.933 --rc genhtml_function_coverage=1 00:06:08.933 --rc genhtml_legend=1 00:06:08.933 --rc geninfo_all_blocks=1 00:06:08.933 --rc geninfo_unexecuted_blocks=1 00:06:08.933 00:06:08.933 ' 00:06:08.933 20:07:20 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:08.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.933 --rc genhtml_branch_coverage=1 00:06:08.933 --rc genhtml_function_coverage=1 00:06:08.933 --rc genhtml_legend=1 00:06:08.933 --rc geninfo_all_blocks=1 00:06:08.933 --rc geninfo_unexecuted_blocks=1 00:06:08.933 00:06:08.933 ' 00:06:08.933 20:07:20 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:08.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:08.933 --rc genhtml_branch_coverage=1 00:06:08.933 --rc genhtml_function_coverage=1 00:06:08.933 --rc genhtml_legend=1 00:06:08.933 --rc geninfo_all_blocks=1 00:06:08.933 --rc geninfo_unexecuted_blocks=1 00:06:08.933 00:06:08.933 ' 00:06:08.933 20:07:20 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:08.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.933 --rc genhtml_branch_coverage=1 00:06:08.933 --rc genhtml_function_coverage=1 00:06:08.933 --rc genhtml_legend=1 00:06:08.933 --rc geninfo_all_blocks=1 00:06:08.933 --rc geninfo_unexecuted_blocks=1 00:06:08.933 00:06:08.933 ' 00:06:08.933 20:07:20 rpc -- rpc/rpc.sh@65 -- # spdk_pid=99129 00:06:08.933 20:07:20 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:08.933 20:07:20 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:08.933 20:07:20 rpc -- rpc/rpc.sh@67 -- # waitforlisten 99129 00:06:08.933 20:07:20 rpc -- common/autotest_common.sh@835 -- # '[' -z 99129 ']' 00:06:08.933 20:07:20 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.933 20:07:20 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.933 20:07:20 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.933 20:07:20 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.933 20:07:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.933 [2024-11-18 20:07:20.805372] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:08.933 [2024-11-18 20:07:20.805460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99129 ] 00:06:08.933 [2024-11-18 20:07:20.870689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.933 [2024-11-18 20:07:20.914962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:08.933 [2024-11-18 20:07:20.915018] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 99129' to capture a snapshot of events at runtime. 00:06:08.933 [2024-11-18 20:07:20.915047] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:08.933 [2024-11-18 20:07:20.915059] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:08.934 [2024-11-18 20:07:20.915068] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid99129 for offline analysis/debug. 
00:06:08.934 [2024-11-18 20:07:20.915616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.192 20:07:21 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.193 20:07:21 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:09.193 20:07:21 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:09.193 20:07:21 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:09.193 20:07:21 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:09.193 20:07:21 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:09.193 20:07:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.193 20:07:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.193 20:07:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.193 ************************************ 00:06:09.193 START TEST rpc_integrity 00:06:09.193 ************************************ 00:06:09.193 20:07:21 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:09.193 20:07:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:09.193 20:07:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.193 20:07:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.453 20:07:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.453 20:07:21 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:06:09.453 20:07:21 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:09.453 20:07:21 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:09.453 20:07:21 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:09.453 20:07:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.453 20:07:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.453 20:07:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.453 20:07:21 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:09.453 20:07:21 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:09.453 20:07:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.453 20:07:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.453 20:07:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.453 20:07:21 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:09.453 { 00:06:09.453 "name": "Malloc0", 00:06:09.453 "aliases": [ 00:06:09.453 "91515916-3487-429c-a86b-562cdfa134c3" 00:06:09.453 ], 00:06:09.453 "product_name": "Malloc disk", 00:06:09.453 "block_size": 512, 00:06:09.453 "num_blocks": 16384, 00:06:09.453 "uuid": "91515916-3487-429c-a86b-562cdfa134c3", 00:06:09.453 "assigned_rate_limits": { 00:06:09.453 "rw_ios_per_sec": 0, 00:06:09.453 "rw_mbytes_per_sec": 0, 00:06:09.453 "r_mbytes_per_sec": 0, 00:06:09.453 "w_mbytes_per_sec": 0 00:06:09.453 }, 00:06:09.453 "claimed": false, 00:06:09.453 "zoned": false, 00:06:09.453 "supported_io_types": { 00:06:09.453 "read": true, 00:06:09.453 "write": true, 00:06:09.453 "unmap": true, 00:06:09.453 "flush": true, 00:06:09.453 "reset": true, 00:06:09.453 "nvme_admin": false, 00:06:09.453 "nvme_io": false, 00:06:09.453 "nvme_io_md": false, 00:06:09.453 "write_zeroes": true, 00:06:09.453 "zcopy": true, 00:06:09.453 "get_zone_info": false, 00:06:09.453 
"zone_management": false, 00:06:09.453 "zone_append": false, 00:06:09.453 "compare": false, 00:06:09.453 "compare_and_write": false, 00:06:09.453 "abort": true, 00:06:09.453 "seek_hole": false, 00:06:09.453 "seek_data": false, 00:06:09.453 "copy": true, 00:06:09.453 "nvme_iov_md": false 00:06:09.453 }, 00:06:09.453 "memory_domains": [ 00:06:09.453 { 00:06:09.453 "dma_device_id": "system", 00:06:09.453 "dma_device_type": 1 00:06:09.453 }, 00:06:09.453 { 00:06:09.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.453 "dma_device_type": 2 00:06:09.453 } 00:06:09.453 ], 00:06:09.453 "driver_specific": {} 00:06:09.453 } 00:06:09.453 ]' 00:06:09.453 20:07:21 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:09.453 20:07:21 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:09.453 20:07:21 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:09.453 20:07:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.453 20:07:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.453 [2024-11-18 20:07:21.304081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:09.453 [2024-11-18 20:07:21.304140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:09.453 [2024-11-18 20:07:21.304164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9048e0 00:06:09.453 [2024-11-18 20:07:21.304177] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:09.453 [2024-11-18 20:07:21.305483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:09.453 [2024-11-18 20:07:21.305506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:09.453 Passthru0 00:06:09.453 20:07:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.453 20:07:21 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:06:09.453 20:07:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.453 20:07:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.453 20:07:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.453 20:07:21 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:09.453 { 00:06:09.453 "name": "Malloc0", 00:06:09.453 "aliases": [ 00:06:09.453 "91515916-3487-429c-a86b-562cdfa134c3" 00:06:09.453 ], 00:06:09.453 "product_name": "Malloc disk", 00:06:09.453 "block_size": 512, 00:06:09.453 "num_blocks": 16384, 00:06:09.453 "uuid": "91515916-3487-429c-a86b-562cdfa134c3", 00:06:09.453 "assigned_rate_limits": { 00:06:09.453 "rw_ios_per_sec": 0, 00:06:09.453 "rw_mbytes_per_sec": 0, 00:06:09.453 "r_mbytes_per_sec": 0, 00:06:09.453 "w_mbytes_per_sec": 0 00:06:09.453 }, 00:06:09.453 "claimed": true, 00:06:09.453 "claim_type": "exclusive_write", 00:06:09.453 "zoned": false, 00:06:09.453 "supported_io_types": { 00:06:09.453 "read": true, 00:06:09.453 "write": true, 00:06:09.453 "unmap": true, 00:06:09.453 "flush": true, 00:06:09.453 "reset": true, 00:06:09.453 "nvme_admin": false, 00:06:09.453 "nvme_io": false, 00:06:09.453 "nvme_io_md": false, 00:06:09.453 "write_zeroes": true, 00:06:09.453 "zcopy": true, 00:06:09.453 "get_zone_info": false, 00:06:09.453 "zone_management": false, 00:06:09.453 "zone_append": false, 00:06:09.453 "compare": false, 00:06:09.453 "compare_and_write": false, 00:06:09.453 "abort": true, 00:06:09.453 "seek_hole": false, 00:06:09.453 "seek_data": false, 00:06:09.453 "copy": true, 00:06:09.453 "nvme_iov_md": false 00:06:09.453 }, 00:06:09.453 "memory_domains": [ 00:06:09.453 { 00:06:09.453 "dma_device_id": "system", 00:06:09.453 "dma_device_type": 1 00:06:09.453 }, 00:06:09.453 { 00:06:09.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.453 "dma_device_type": 2 00:06:09.453 } 00:06:09.453 ], 00:06:09.453 "driver_specific": {} 00:06:09.453 }, 00:06:09.453 { 
00:06:09.453 "name": "Passthru0", 00:06:09.453 "aliases": [ 00:06:09.453 "e63e6874-cee0-50fd-ab6d-5ee22600ca34" 00:06:09.453 ], 00:06:09.453 "product_name": "passthru", 00:06:09.453 "block_size": 512, 00:06:09.453 "num_blocks": 16384, 00:06:09.453 "uuid": "e63e6874-cee0-50fd-ab6d-5ee22600ca34", 00:06:09.453 "assigned_rate_limits": { 00:06:09.453 "rw_ios_per_sec": 0, 00:06:09.453 "rw_mbytes_per_sec": 0, 00:06:09.453 "r_mbytes_per_sec": 0, 00:06:09.453 "w_mbytes_per_sec": 0 00:06:09.453 }, 00:06:09.453 "claimed": false, 00:06:09.453 "zoned": false, 00:06:09.453 "supported_io_types": { 00:06:09.453 "read": true, 00:06:09.453 "write": true, 00:06:09.453 "unmap": true, 00:06:09.453 "flush": true, 00:06:09.453 "reset": true, 00:06:09.453 "nvme_admin": false, 00:06:09.453 "nvme_io": false, 00:06:09.453 "nvme_io_md": false, 00:06:09.453 "write_zeroes": true, 00:06:09.453 "zcopy": true, 00:06:09.453 "get_zone_info": false, 00:06:09.453 "zone_management": false, 00:06:09.453 "zone_append": false, 00:06:09.453 "compare": false, 00:06:09.453 "compare_and_write": false, 00:06:09.453 "abort": true, 00:06:09.453 "seek_hole": false, 00:06:09.453 "seek_data": false, 00:06:09.453 "copy": true, 00:06:09.453 "nvme_iov_md": false 00:06:09.453 }, 00:06:09.453 "memory_domains": [ 00:06:09.453 { 00:06:09.453 "dma_device_id": "system", 00:06:09.453 "dma_device_type": 1 00:06:09.453 }, 00:06:09.453 { 00:06:09.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.453 "dma_device_type": 2 00:06:09.453 } 00:06:09.453 ], 00:06:09.454 "driver_specific": { 00:06:09.454 "passthru": { 00:06:09.454 "name": "Passthru0", 00:06:09.454 "base_bdev_name": "Malloc0" 00:06:09.454 } 00:06:09.454 } 00:06:09.454 } 00:06:09.454 ]' 00:06:09.454 20:07:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:09.454 20:07:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:09.454 20:07:21 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:09.454 20:07:21 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.454 20:07:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.454 20:07:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.454 20:07:21 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:09.454 20:07:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.454 20:07:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.454 20:07:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.454 20:07:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:09.454 20:07:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.454 20:07:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.454 20:07:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.454 20:07:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:09.454 20:07:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:09.454 20:07:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:09.454 00:06:09.454 real 0m0.221s 00:06:09.454 user 0m0.146s 00:06:09.454 sys 0m0.021s 00:06:09.454 20:07:21 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.454 20:07:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.454 ************************************ 00:06:09.454 END TEST rpc_integrity 00:06:09.454 ************************************ 00:06:09.454 20:07:21 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:09.454 20:07:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.454 20:07:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.454 20:07:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.713 ************************************ 00:06:09.713 START TEST rpc_plugins 
00:06:09.713 ************************************ 00:06:09.713 20:07:21 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:09.713 20:07:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:09.713 20:07:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.713 20:07:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:09.713 20:07:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.713 20:07:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:09.713 20:07:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:09.713 20:07:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.713 20:07:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:09.713 20:07:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.713 20:07:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:09.713 { 00:06:09.713 "name": "Malloc1", 00:06:09.713 "aliases": [ 00:06:09.713 "12f8803d-9dff-435c-a150-03078b8fd0c7" 00:06:09.713 ], 00:06:09.713 "product_name": "Malloc disk", 00:06:09.713 "block_size": 4096, 00:06:09.713 "num_blocks": 256, 00:06:09.713 "uuid": "12f8803d-9dff-435c-a150-03078b8fd0c7", 00:06:09.713 "assigned_rate_limits": { 00:06:09.713 "rw_ios_per_sec": 0, 00:06:09.713 "rw_mbytes_per_sec": 0, 00:06:09.713 "r_mbytes_per_sec": 0, 00:06:09.713 "w_mbytes_per_sec": 0 00:06:09.713 }, 00:06:09.713 "claimed": false, 00:06:09.713 "zoned": false, 00:06:09.713 "supported_io_types": { 00:06:09.713 "read": true, 00:06:09.713 "write": true, 00:06:09.713 "unmap": true, 00:06:09.713 "flush": true, 00:06:09.713 "reset": true, 00:06:09.713 "nvme_admin": false, 00:06:09.713 "nvme_io": false, 00:06:09.713 "nvme_io_md": false, 00:06:09.713 "write_zeroes": true, 00:06:09.713 "zcopy": true, 00:06:09.713 "get_zone_info": false, 00:06:09.713 "zone_management": false, 00:06:09.713 
"zone_append": false, 00:06:09.713 "compare": false, 00:06:09.713 "compare_and_write": false, 00:06:09.713 "abort": true, 00:06:09.713 "seek_hole": false, 00:06:09.713 "seek_data": false, 00:06:09.713 "copy": true, 00:06:09.713 "nvme_iov_md": false 00:06:09.713 }, 00:06:09.713 "memory_domains": [ 00:06:09.713 { 00:06:09.713 "dma_device_id": "system", 00:06:09.713 "dma_device_type": 1 00:06:09.713 }, 00:06:09.713 { 00:06:09.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.713 "dma_device_type": 2 00:06:09.713 } 00:06:09.713 ], 00:06:09.713 "driver_specific": {} 00:06:09.713 } 00:06:09.713 ]' 00:06:09.713 20:07:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:09.713 20:07:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:09.713 20:07:21 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:09.713 20:07:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.713 20:07:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:09.713 20:07:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.713 20:07:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:09.713 20:07:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.713 20:07:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:09.713 20:07:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.713 20:07:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:09.713 20:07:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:09.713 20:07:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:09.713 00:06:09.713 real 0m0.107s 00:06:09.713 user 0m0.070s 00:06:09.713 sys 0m0.008s 00:06:09.713 20:07:21 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.713 20:07:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:09.713 ************************************ 
00:06:09.713 END TEST rpc_plugins 00:06:09.713 ************************************ 00:06:09.713 20:07:21 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:09.713 20:07:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.713 20:07:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.713 20:07:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.713 ************************************ 00:06:09.713 START TEST rpc_trace_cmd_test 00:06:09.713 ************************************ 00:06:09.713 20:07:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:09.713 20:07:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:09.713 20:07:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:09.713 20:07:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.713 20:07:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:09.713 20:07:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.713 20:07:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:09.713 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid99129", 00:06:09.713 "tpoint_group_mask": "0x8", 00:06:09.713 "iscsi_conn": { 00:06:09.713 "mask": "0x2", 00:06:09.713 "tpoint_mask": "0x0" 00:06:09.713 }, 00:06:09.713 "scsi": { 00:06:09.713 "mask": "0x4", 00:06:09.713 "tpoint_mask": "0x0" 00:06:09.713 }, 00:06:09.713 "bdev": { 00:06:09.713 "mask": "0x8", 00:06:09.713 "tpoint_mask": "0xffffffffffffffff" 00:06:09.713 }, 00:06:09.713 "nvmf_rdma": { 00:06:09.713 "mask": "0x10", 00:06:09.713 "tpoint_mask": "0x0" 00:06:09.713 }, 00:06:09.713 "nvmf_tcp": { 00:06:09.713 "mask": "0x20", 00:06:09.713 "tpoint_mask": "0x0" 00:06:09.713 }, 00:06:09.713 "ftl": { 00:06:09.713 "mask": "0x40", 00:06:09.713 "tpoint_mask": "0x0" 00:06:09.713 }, 00:06:09.713 "blobfs": { 00:06:09.713 "mask": "0x80", 00:06:09.713 
"tpoint_mask": "0x0" 00:06:09.713 }, 00:06:09.713 "dsa": { 00:06:09.713 "mask": "0x200", 00:06:09.714 "tpoint_mask": "0x0" 00:06:09.714 }, 00:06:09.714 "thread": { 00:06:09.714 "mask": "0x400", 00:06:09.714 "tpoint_mask": "0x0" 00:06:09.714 }, 00:06:09.714 "nvme_pcie": { 00:06:09.714 "mask": "0x800", 00:06:09.714 "tpoint_mask": "0x0" 00:06:09.714 }, 00:06:09.714 "iaa": { 00:06:09.714 "mask": "0x1000", 00:06:09.714 "tpoint_mask": "0x0" 00:06:09.714 }, 00:06:09.714 "nvme_tcp": { 00:06:09.714 "mask": "0x2000", 00:06:09.714 "tpoint_mask": "0x0" 00:06:09.714 }, 00:06:09.714 "bdev_nvme": { 00:06:09.714 "mask": "0x4000", 00:06:09.714 "tpoint_mask": "0x0" 00:06:09.714 }, 00:06:09.714 "sock": { 00:06:09.714 "mask": "0x8000", 00:06:09.714 "tpoint_mask": "0x0" 00:06:09.714 }, 00:06:09.714 "blob": { 00:06:09.714 "mask": "0x10000", 00:06:09.714 "tpoint_mask": "0x0" 00:06:09.714 }, 00:06:09.714 "bdev_raid": { 00:06:09.714 "mask": "0x20000", 00:06:09.714 "tpoint_mask": "0x0" 00:06:09.714 }, 00:06:09.714 "scheduler": { 00:06:09.714 "mask": "0x40000", 00:06:09.714 "tpoint_mask": "0x0" 00:06:09.714 } 00:06:09.714 }' 00:06:09.714 20:07:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:09.714 20:07:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:09.714 20:07:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:09.714 20:07:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:09.714 20:07:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:09.973 20:07:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:09.973 20:07:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:09.973 20:07:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:09.973 20:07:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:09.973 20:07:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:06:09.973 00:06:09.973 real 0m0.179s 00:06:09.973 user 0m0.155s 00:06:09.973 sys 0m0.018s 00:06:09.973 20:07:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.973 20:07:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:09.973 ************************************ 00:06:09.973 END TEST rpc_trace_cmd_test 00:06:09.973 ************************************ 00:06:09.973 20:07:21 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:09.973 20:07:21 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:09.973 20:07:21 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:09.973 20:07:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.973 20:07:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.973 20:07:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.973 ************************************ 00:06:09.973 START TEST rpc_daemon_integrity 00:06:09.973 ************************************ 00:06:09.973 20:07:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:09.973 20:07:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:09.973 20:07:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.973 20:07:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.973 20:07:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.973 20:07:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:09.973 20:07:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:09.973 20:07:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:09.973 20:07:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:09.973 20:07:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.973 20:07:21 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:06:09.973 20:07:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.973 20:07:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:09.973 20:07:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:09.973 20:07:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.973 20:07:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.973 20:07:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.973 20:07:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:09.973 { 00:06:09.973 "name": "Malloc2", 00:06:09.973 "aliases": [ 00:06:09.973 "50ae7067-ea38-4a1b-ad4c-3ae4d53c8aa4" 00:06:09.973 ], 00:06:09.973 "product_name": "Malloc disk", 00:06:09.973 "block_size": 512, 00:06:09.973 "num_blocks": 16384, 00:06:09.973 "uuid": "50ae7067-ea38-4a1b-ad4c-3ae4d53c8aa4", 00:06:09.973 "assigned_rate_limits": { 00:06:09.973 "rw_ios_per_sec": 0, 00:06:09.973 "rw_mbytes_per_sec": 0, 00:06:09.973 "r_mbytes_per_sec": 0, 00:06:09.973 "w_mbytes_per_sec": 0 00:06:09.973 }, 00:06:09.973 "claimed": false, 00:06:09.973 "zoned": false, 00:06:09.973 "supported_io_types": { 00:06:09.973 "read": true, 00:06:09.973 "write": true, 00:06:09.973 "unmap": true, 00:06:09.973 "flush": true, 00:06:09.973 "reset": true, 00:06:09.973 "nvme_admin": false, 00:06:09.973 "nvme_io": false, 00:06:09.973 "nvme_io_md": false, 00:06:09.973 "write_zeroes": true, 00:06:09.973 "zcopy": true, 00:06:09.973 "get_zone_info": false, 00:06:09.973 "zone_management": false, 00:06:09.973 "zone_append": false, 00:06:09.973 "compare": false, 00:06:09.973 "compare_and_write": false, 00:06:09.973 "abort": true, 00:06:09.973 "seek_hole": false, 00:06:09.973 "seek_data": false, 00:06:09.973 "copy": true, 00:06:09.973 "nvme_iov_md": false 00:06:09.973 }, 00:06:09.973 "memory_domains": [ 00:06:09.973 { 
00:06:09.973 "dma_device_id": "system", 00:06:09.973 "dma_device_type": 1 00:06:09.973 }, 00:06:09.973 { 00:06:09.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.973 "dma_device_type": 2 00:06:09.973 } 00:06:09.973 ], 00:06:09.973 "driver_specific": {} 00:06:09.973 } 00:06:09.973 ]' 00:06:09.973 20:07:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:09.973 20:07:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:09.973 20:07:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:09.973 20:07:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.973 20:07:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.973 [2024-11-18 20:07:21.938045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:09.973 [2024-11-18 20:07:21.938101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:09.973 [2024-11-18 20:07:21.938124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa346f0 00:06:09.973 [2024-11-18 20:07:21.938137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:09.973 [2024-11-18 20:07:21.939324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:09.974 [2024-11-18 20:07:21.939347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:09.974 Passthru0 00:06:09.974 20:07:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.974 20:07:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:09.974 20:07:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.974 20:07:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.974 20:07:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:09.974 20:07:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:09.974 { 00:06:09.974 "name": "Malloc2", 00:06:09.974 "aliases": [ 00:06:09.974 "50ae7067-ea38-4a1b-ad4c-3ae4d53c8aa4" 00:06:09.974 ], 00:06:09.974 "product_name": "Malloc disk", 00:06:09.974 "block_size": 512, 00:06:09.974 "num_blocks": 16384, 00:06:09.974 "uuid": "50ae7067-ea38-4a1b-ad4c-3ae4d53c8aa4", 00:06:09.974 "assigned_rate_limits": { 00:06:09.974 "rw_ios_per_sec": 0, 00:06:09.974 "rw_mbytes_per_sec": 0, 00:06:09.974 "r_mbytes_per_sec": 0, 00:06:09.974 "w_mbytes_per_sec": 0 00:06:09.974 }, 00:06:09.974 "claimed": true, 00:06:09.974 "claim_type": "exclusive_write", 00:06:09.974 "zoned": false, 00:06:09.974 "supported_io_types": { 00:06:09.974 "read": true, 00:06:09.974 "write": true, 00:06:09.974 "unmap": true, 00:06:09.974 "flush": true, 00:06:09.974 "reset": true, 00:06:09.974 "nvme_admin": false, 00:06:09.974 "nvme_io": false, 00:06:09.974 "nvme_io_md": false, 00:06:09.974 "write_zeroes": true, 00:06:09.974 "zcopy": true, 00:06:09.974 "get_zone_info": false, 00:06:09.974 "zone_management": false, 00:06:09.974 "zone_append": false, 00:06:09.974 "compare": false, 00:06:09.974 "compare_and_write": false, 00:06:09.974 "abort": true, 00:06:09.974 "seek_hole": false, 00:06:09.974 "seek_data": false, 00:06:09.974 "copy": true, 00:06:09.974 "nvme_iov_md": false 00:06:09.974 }, 00:06:09.974 "memory_domains": [ 00:06:09.974 { 00:06:09.974 "dma_device_id": "system", 00:06:09.974 "dma_device_type": 1 00:06:09.974 }, 00:06:09.974 { 00:06:09.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.974 "dma_device_type": 2 00:06:09.974 } 00:06:09.974 ], 00:06:09.974 "driver_specific": {} 00:06:09.974 }, 00:06:09.974 { 00:06:09.974 "name": "Passthru0", 00:06:09.974 "aliases": [ 00:06:09.974 "53abe36e-8fb9-5639-a612-a4be5c6e87a6" 00:06:09.974 ], 00:06:09.974 "product_name": "passthru", 00:06:09.974 "block_size": 512, 00:06:09.974 "num_blocks": 16384, 00:06:09.974 "uuid": 
"53abe36e-8fb9-5639-a612-a4be5c6e87a6", 00:06:09.974 "assigned_rate_limits": { 00:06:09.974 "rw_ios_per_sec": 0, 00:06:09.974 "rw_mbytes_per_sec": 0, 00:06:09.974 "r_mbytes_per_sec": 0, 00:06:09.974 "w_mbytes_per_sec": 0 00:06:09.974 }, 00:06:09.974 "claimed": false, 00:06:09.974 "zoned": false, 00:06:09.974 "supported_io_types": { 00:06:09.974 "read": true, 00:06:09.974 "write": true, 00:06:09.974 "unmap": true, 00:06:09.974 "flush": true, 00:06:09.974 "reset": true, 00:06:09.974 "nvme_admin": false, 00:06:09.974 "nvme_io": false, 00:06:09.974 "nvme_io_md": false, 00:06:09.974 "write_zeroes": true, 00:06:09.974 "zcopy": true, 00:06:09.974 "get_zone_info": false, 00:06:09.974 "zone_management": false, 00:06:09.974 "zone_append": false, 00:06:09.974 "compare": false, 00:06:09.974 "compare_and_write": false, 00:06:09.974 "abort": true, 00:06:09.974 "seek_hole": false, 00:06:09.974 "seek_data": false, 00:06:09.974 "copy": true, 00:06:09.974 "nvme_iov_md": false 00:06:09.974 }, 00:06:09.974 "memory_domains": [ 00:06:09.974 { 00:06:09.974 "dma_device_id": "system", 00:06:09.974 "dma_device_type": 1 00:06:09.974 }, 00:06:09.974 { 00:06:09.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.974 "dma_device_type": 2 00:06:09.974 } 00:06:09.974 ], 00:06:09.974 "driver_specific": { 00:06:09.974 "passthru": { 00:06:09.974 "name": "Passthru0", 00:06:09.974 "base_bdev_name": "Malloc2" 00:06:09.974 } 00:06:09.974 } 00:06:09.974 } 00:06:09.974 ]' 00:06:09.974 20:07:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:10.233 20:07:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:10.233 20:07:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:10.233 20:07:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.233 20:07:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.233 20:07:22 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.233 20:07:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:10.233 20:07:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.233 20:07:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.233 20:07:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.233 20:07:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:10.233 20:07:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.233 20:07:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.233 20:07:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.233 20:07:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:10.233 20:07:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:10.233 20:07:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:10.233 00:06:10.233 real 0m0.218s 00:06:10.233 user 0m0.134s 00:06:10.233 sys 0m0.027s 00:06:10.233 20:07:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.233 20:07:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.233 ************************************ 00:06:10.233 END TEST rpc_daemon_integrity 00:06:10.233 ************************************ 00:06:10.233 20:07:22 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:10.233 20:07:22 rpc -- rpc/rpc.sh@84 -- # killprocess 99129 00:06:10.233 20:07:22 rpc -- common/autotest_common.sh@954 -- # '[' -z 99129 ']' 00:06:10.233 20:07:22 rpc -- common/autotest_common.sh@958 -- # kill -0 99129 00:06:10.233 20:07:22 rpc -- common/autotest_common.sh@959 -- # uname 00:06:10.233 20:07:22 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.233 20:07:22 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99129 00:06:10.233 20:07:22 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.233 20:07:22 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.233 20:07:22 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99129' 00:06:10.233 killing process with pid 99129 00:06:10.233 20:07:22 rpc -- common/autotest_common.sh@973 -- # kill 99129 00:06:10.233 20:07:22 rpc -- common/autotest_common.sh@978 -- # wait 99129 00:06:10.493 00:06:10.493 real 0m1.888s 00:06:10.493 user 0m2.348s 00:06:10.493 sys 0m0.603s 00:06:10.493 20:07:22 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.493 20:07:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.493 ************************************ 00:06:10.493 END TEST rpc 00:06:10.493 ************************************ 00:06:10.755 20:07:22 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:10.755 20:07:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.755 20:07:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.755 20:07:22 -- common/autotest_common.sh@10 -- # set +x 00:06:10.755 ************************************ 00:06:10.755 START TEST skip_rpc 00:06:10.755 ************************************ 00:06:10.755 20:07:22 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:10.755 * Looking for test storage... 
00:06:10.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:10.755 20:07:22 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:10.755 20:07:22 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:10.755 20:07:22 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:10.755 20:07:22 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:10.755 20:07:22 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.755 20:07:22 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.755 20:07:22 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.755 20:07:22 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.755 20:07:22 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.755 20:07:22 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.755 20:07:22 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.755 20:07:22 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.755 20:07:22 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.755 20:07:22 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.755 20:07:22 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.755 20:07:22 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:10.755 20:07:22 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:10.755 20:07:22 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.755 20:07:22 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.755 20:07:22 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:10.755 20:07:22 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:10.755 20:07:22 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.755 20:07:22 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:10.755 20:07:22 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.755 20:07:22 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:10.755 20:07:22 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:10.755 20:07:22 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.755 20:07:22 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:10.755 20:07:22 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.755 20:07:22 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.755 20:07:22 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.755 20:07:22 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:10.755 20:07:22 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.755 20:07:22 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:10.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.755 --rc genhtml_branch_coverage=1 00:06:10.755 --rc genhtml_function_coverage=1 00:06:10.755 --rc genhtml_legend=1 00:06:10.755 --rc geninfo_all_blocks=1 00:06:10.755 --rc geninfo_unexecuted_blocks=1 00:06:10.755 00:06:10.755 ' 00:06:10.755 20:07:22 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:10.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.755 --rc genhtml_branch_coverage=1 00:06:10.755 --rc genhtml_function_coverage=1 00:06:10.755 --rc genhtml_legend=1 00:06:10.755 --rc geninfo_all_blocks=1 00:06:10.755 --rc geninfo_unexecuted_blocks=1 00:06:10.755 00:06:10.755 ' 00:06:10.755 20:07:22 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:10.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.755 --rc genhtml_branch_coverage=1 00:06:10.755 --rc genhtml_function_coverage=1 00:06:10.755 --rc genhtml_legend=1 00:06:10.755 --rc geninfo_all_blocks=1 00:06:10.755 --rc geninfo_unexecuted_blocks=1 00:06:10.755 00:06:10.755 ' 00:06:10.755 20:07:22 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:10.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.755 --rc genhtml_branch_coverage=1 00:06:10.755 --rc genhtml_function_coverage=1 00:06:10.755 --rc genhtml_legend=1 00:06:10.755 --rc geninfo_all_blocks=1 00:06:10.755 --rc geninfo_unexecuted_blocks=1 00:06:10.755 00:06:10.755 ' 00:06:10.755 20:07:22 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:10.755 20:07:22 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:10.755 20:07:22 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:10.755 20:07:22 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.755 20:07:22 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.755 20:07:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.755 ************************************ 00:06:10.755 START TEST skip_rpc 00:06:10.755 ************************************ 00:06:10.755 20:07:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:10.755 20:07:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=99459 00:06:10.755 20:07:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:10.755 20:07:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.755 20:07:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:06:10.755 [2024-11-18 20:07:22.755559] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:10.755 [2024-11-18 20:07:22.755654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99459 ] 00:06:11.016 [2024-11-18 20:07:22.819142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.016 [2024-11-18 20:07:22.863470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.280 20:07:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:16.280 20:07:27 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:16.280 20:07:27 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:16.280 20:07:27 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:16.280 20:07:27 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.280 20:07:27 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:16.280 20:07:27 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.280 20:07:27 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:16.280 20:07:27 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.280 20:07:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.280 20:07:27 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:16.280 20:07:27 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:16.280 20:07:27 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:16.280 20:07:27 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:16.280 20:07:27 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:16.280 20:07:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:16.280 20:07:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 99459 00:06:16.281 20:07:27 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 99459 ']' 00:06:16.281 20:07:27 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 99459 00:06:16.281 20:07:27 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:16.281 20:07:27 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.281 20:07:27 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99459 00:06:16.281 20:07:27 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.281 20:07:27 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.281 20:07:27 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99459' 00:06:16.281 killing process with pid 99459 00:06:16.281 20:07:27 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 99459 00:06:16.281 20:07:27 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 99459 00:06:16.281 00:06:16.281 real 0m5.425s 00:06:16.281 user 0m5.136s 00:06:16.281 sys 0m0.308s 00:06:16.281 20:07:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.281 20:07:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.281 ************************************ 00:06:16.281 END TEST skip_rpc 00:06:16.281 ************************************ 00:06:16.281 20:07:28 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:16.281 20:07:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.281 20:07:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.281 20:07:28 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:06:16.281 ************************************ 00:06:16.281 START TEST skip_rpc_with_json 00:06:16.281 ************************************ 00:06:16.281 20:07:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:16.281 20:07:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:16.281 20:07:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=100146 00:06:16.281 20:07:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.281 20:07:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:16.281 20:07:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 100146 00:06:16.281 20:07:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 100146 ']' 00:06:16.281 20:07:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.281 20:07:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.281 20:07:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.281 20:07:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.281 20:07:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:16.281 [2024-11-18 20:07:28.239006] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:16.281 [2024-11-18 20:07:28.239086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100146 ] 00:06:16.540 [2024-11-18 20:07:28.306812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.540 [2024-11-18 20:07:28.355272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.799 20:07:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.799 20:07:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:16.799 20:07:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:16.799 20:07:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.799 20:07:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:16.799 [2024-11-18 20:07:28.617346] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:16.799 request: 00:06:16.799 { 00:06:16.799 "trtype": "tcp", 00:06:16.799 "method": "nvmf_get_transports", 00:06:16.799 "req_id": 1 00:06:16.799 } 00:06:16.799 Got JSON-RPC error response 00:06:16.799 response: 00:06:16.799 { 00:06:16.799 "code": -19, 00:06:16.799 "message": "No such device" 00:06:16.799 } 00:06:16.799 20:07:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:16.799 20:07:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:16.799 20:07:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.799 20:07:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:16.799 [2024-11-18 20:07:28.625452] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:16.799 20:07:28 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.799 20:07:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:16.799 20:07:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.799 20:07:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:16.799 20:07:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.799 20:07:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:16.799 { 00:06:16.799 "subsystems": [ 00:06:16.799 { 00:06:16.799 "subsystem": "fsdev", 00:06:16.799 "config": [ 00:06:16.799 { 00:06:16.799 "method": "fsdev_set_opts", 00:06:16.799 "params": { 00:06:16.799 "fsdev_io_pool_size": 65535, 00:06:16.799 "fsdev_io_cache_size": 256 00:06:16.799 } 00:06:16.799 } 00:06:16.799 ] 00:06:16.799 }, 00:06:16.799 { 00:06:16.799 "subsystem": "vfio_user_target", 00:06:16.799 "config": null 00:06:16.799 }, 00:06:16.799 { 00:06:16.799 "subsystem": "keyring", 00:06:16.799 "config": [] 00:06:16.799 }, 00:06:16.799 { 00:06:16.799 "subsystem": "iobuf", 00:06:16.799 "config": [ 00:06:16.799 { 00:06:16.799 "method": "iobuf_set_options", 00:06:16.799 "params": { 00:06:16.799 "small_pool_count": 8192, 00:06:16.799 "large_pool_count": 1024, 00:06:16.799 "small_bufsize": 8192, 00:06:16.799 "large_bufsize": 135168, 00:06:16.799 "enable_numa": false 00:06:16.799 } 00:06:16.799 } 00:06:16.799 ] 00:06:16.799 }, 00:06:16.799 { 00:06:16.799 "subsystem": "sock", 00:06:16.799 "config": [ 00:06:16.800 { 00:06:16.800 "method": "sock_set_default_impl", 00:06:16.800 "params": { 00:06:16.800 "impl_name": "posix" 00:06:16.800 } 00:06:16.800 }, 00:06:16.800 { 00:06:16.800 "method": "sock_impl_set_options", 00:06:16.800 "params": { 00:06:16.800 "impl_name": "ssl", 00:06:16.800 "recv_buf_size": 4096, 00:06:16.800 "send_buf_size": 4096, 
00:06:16.800 "enable_recv_pipe": true, 00:06:16.800 "enable_quickack": false, 00:06:16.800 "enable_placement_id": 0, 00:06:16.800 "enable_zerocopy_send_server": true, 00:06:16.800 "enable_zerocopy_send_client": false, 00:06:16.800 "zerocopy_threshold": 0, 00:06:16.800 "tls_version": 0, 00:06:16.800 "enable_ktls": false 00:06:16.800 } 00:06:16.800 }, 00:06:16.800 { 00:06:16.800 "method": "sock_impl_set_options", 00:06:16.800 "params": { 00:06:16.800 "impl_name": "posix", 00:06:16.800 "recv_buf_size": 2097152, 00:06:16.800 "send_buf_size": 2097152, 00:06:16.800 "enable_recv_pipe": true, 00:06:16.800 "enable_quickack": false, 00:06:16.800 "enable_placement_id": 0, 00:06:16.800 "enable_zerocopy_send_server": true, 00:06:16.800 "enable_zerocopy_send_client": false, 00:06:16.800 "zerocopy_threshold": 0, 00:06:16.800 "tls_version": 0, 00:06:16.800 "enable_ktls": false 00:06:16.800 } 00:06:16.800 } 00:06:16.800 ] 00:06:16.800 }, 00:06:16.800 { 00:06:16.800 "subsystem": "vmd", 00:06:16.800 "config": [] 00:06:16.800 }, 00:06:16.800 { 00:06:16.800 "subsystem": "accel", 00:06:16.800 "config": [ 00:06:16.800 { 00:06:16.800 "method": "accel_set_options", 00:06:16.800 "params": { 00:06:16.800 "small_cache_size": 128, 00:06:16.800 "large_cache_size": 16, 00:06:16.800 "task_count": 2048, 00:06:16.800 "sequence_count": 2048, 00:06:16.800 "buf_count": 2048 00:06:16.800 } 00:06:16.800 } 00:06:16.800 ] 00:06:16.800 }, 00:06:16.800 { 00:06:16.800 "subsystem": "bdev", 00:06:16.800 "config": [ 00:06:16.800 { 00:06:16.800 "method": "bdev_set_options", 00:06:16.800 "params": { 00:06:16.800 "bdev_io_pool_size": 65535, 00:06:16.800 "bdev_io_cache_size": 256, 00:06:16.800 "bdev_auto_examine": true, 00:06:16.800 "iobuf_small_cache_size": 128, 00:06:16.800 "iobuf_large_cache_size": 16 00:06:16.800 } 00:06:16.800 }, 00:06:16.800 { 00:06:16.800 "method": "bdev_raid_set_options", 00:06:16.800 "params": { 00:06:16.800 "process_window_size_kb": 1024, 00:06:16.800 "process_max_bandwidth_mb_sec": 0 
00:06:16.800 } 00:06:16.800 }, 00:06:16.800 { 00:06:16.800 "method": "bdev_iscsi_set_options", 00:06:16.800 "params": { 00:06:16.800 "timeout_sec": 30 00:06:16.800 } 00:06:16.800 }, 00:06:16.800 { 00:06:16.800 "method": "bdev_nvme_set_options", 00:06:16.800 "params": { 00:06:16.800 "action_on_timeout": "none", 00:06:16.800 "timeout_us": 0, 00:06:16.800 "timeout_admin_us": 0, 00:06:16.800 "keep_alive_timeout_ms": 10000, 00:06:16.800 "arbitration_burst": 0, 00:06:16.800 "low_priority_weight": 0, 00:06:16.800 "medium_priority_weight": 0, 00:06:16.800 "high_priority_weight": 0, 00:06:16.800 "nvme_adminq_poll_period_us": 10000, 00:06:16.800 "nvme_ioq_poll_period_us": 0, 00:06:16.800 "io_queue_requests": 0, 00:06:16.800 "delay_cmd_submit": true, 00:06:16.800 "transport_retry_count": 4, 00:06:16.800 "bdev_retry_count": 3, 00:06:16.800 "transport_ack_timeout": 0, 00:06:16.800 "ctrlr_loss_timeout_sec": 0, 00:06:16.800 "reconnect_delay_sec": 0, 00:06:16.800 "fast_io_fail_timeout_sec": 0, 00:06:16.800 "disable_auto_failback": false, 00:06:16.800 "generate_uuids": false, 00:06:16.800 "transport_tos": 0, 00:06:16.800 "nvme_error_stat": false, 00:06:16.800 "rdma_srq_size": 0, 00:06:16.800 "io_path_stat": false, 00:06:16.800 "allow_accel_sequence": false, 00:06:16.800 "rdma_max_cq_size": 0, 00:06:16.800 "rdma_cm_event_timeout_ms": 0, 00:06:16.800 "dhchap_digests": [ 00:06:16.800 "sha256", 00:06:16.800 "sha384", 00:06:16.800 "sha512" 00:06:16.800 ], 00:06:16.800 "dhchap_dhgroups": [ 00:06:16.800 "null", 00:06:16.800 "ffdhe2048", 00:06:16.800 "ffdhe3072", 00:06:16.800 "ffdhe4096", 00:06:16.800 "ffdhe6144", 00:06:16.800 "ffdhe8192" 00:06:16.800 ] 00:06:16.800 } 00:06:16.800 }, 00:06:16.800 { 00:06:16.800 "method": "bdev_nvme_set_hotplug", 00:06:16.800 "params": { 00:06:16.800 "period_us": 100000, 00:06:16.800 "enable": false 00:06:16.800 } 00:06:16.800 }, 00:06:16.800 { 00:06:16.800 "method": "bdev_wait_for_examine" 00:06:16.800 } 00:06:16.800 ] 00:06:16.800 }, 00:06:16.800 { 
00:06:16.800 "subsystem": "scsi", 00:06:16.800 "config": null 00:06:16.800 }, 00:06:16.800 { 00:06:16.800 "subsystem": "scheduler", 00:06:16.800 "config": [ 00:06:16.800 { 00:06:16.800 "method": "framework_set_scheduler", 00:06:16.800 "params": { 00:06:16.800 "name": "static" 00:06:16.800 } 00:06:16.800 } 00:06:16.800 ] 00:06:16.800 }, 00:06:16.800 { 00:06:16.800 "subsystem": "vhost_scsi", 00:06:16.800 "config": [] 00:06:16.800 }, 00:06:16.800 { 00:06:16.800 "subsystem": "vhost_blk", 00:06:16.800 "config": [] 00:06:16.800 }, 00:06:16.800 { 00:06:16.800 "subsystem": "ublk", 00:06:16.800 "config": [] 00:06:16.800 }, 00:06:16.800 { 00:06:16.800 "subsystem": "nbd", 00:06:16.800 "config": [] 00:06:16.800 }, 00:06:16.800 { 00:06:16.800 "subsystem": "nvmf", 00:06:16.800 "config": [ 00:06:16.800 { 00:06:16.800 "method": "nvmf_set_config", 00:06:16.800 "params": { 00:06:16.800 "discovery_filter": "match_any", 00:06:16.800 "admin_cmd_passthru": { 00:06:16.800 "identify_ctrlr": false 00:06:16.800 }, 00:06:16.800 "dhchap_digests": [ 00:06:16.800 "sha256", 00:06:16.800 "sha384", 00:06:16.800 "sha512" 00:06:16.800 ], 00:06:16.800 "dhchap_dhgroups": [ 00:06:16.800 "null", 00:06:16.800 "ffdhe2048", 00:06:16.800 "ffdhe3072", 00:06:16.800 "ffdhe4096", 00:06:16.800 "ffdhe6144", 00:06:16.800 "ffdhe8192" 00:06:16.800 ] 00:06:16.800 } 00:06:16.800 }, 00:06:16.800 { 00:06:16.800 "method": "nvmf_set_max_subsystems", 00:06:16.800 "params": { 00:06:16.800 "max_subsystems": 1024 00:06:16.800 } 00:06:16.800 }, 00:06:16.800 { 00:06:16.800 "method": "nvmf_set_crdt", 00:06:16.800 "params": { 00:06:16.800 "crdt1": 0, 00:06:16.800 "crdt2": 0, 00:06:16.800 "crdt3": 0 00:06:16.800 } 00:06:16.800 }, 00:06:16.800 { 00:06:16.800 "method": "nvmf_create_transport", 00:06:16.800 "params": { 00:06:16.800 "trtype": "TCP", 00:06:16.800 "max_queue_depth": 128, 00:06:16.800 "max_io_qpairs_per_ctrlr": 127, 00:06:16.800 "in_capsule_data_size": 4096, 00:06:16.800 "max_io_size": 131072, 00:06:16.800 
"io_unit_size": 131072, 00:06:16.800 "max_aq_depth": 128, 00:06:16.800 "num_shared_buffers": 511, 00:06:16.800 "buf_cache_size": 4294967295, 00:06:16.800 "dif_insert_or_strip": false, 00:06:16.800 "zcopy": false, 00:06:16.800 "c2h_success": true, 00:06:16.800 "sock_priority": 0, 00:06:16.800 "abort_timeout_sec": 1, 00:06:16.800 "ack_timeout": 0, 00:06:16.800 "data_wr_pool_size": 0 00:06:16.800 } 00:06:16.800 } 00:06:16.800 ] 00:06:16.800 }, 00:06:16.800 { 00:06:16.800 "subsystem": "iscsi", 00:06:16.800 "config": [ 00:06:16.800 { 00:06:16.800 "method": "iscsi_set_options", 00:06:16.800 "params": { 00:06:16.800 "node_base": "iqn.2016-06.io.spdk", 00:06:16.800 "max_sessions": 128, 00:06:16.800 "max_connections_per_session": 2, 00:06:16.800 "max_queue_depth": 64, 00:06:16.800 "default_time2wait": 2, 00:06:16.800 "default_time2retain": 20, 00:06:16.800 "first_burst_length": 8192, 00:06:16.800 "immediate_data": true, 00:06:16.800 "allow_duplicated_isid": false, 00:06:16.801 "error_recovery_level": 0, 00:06:16.801 "nop_timeout": 60, 00:06:16.801 "nop_in_interval": 30, 00:06:16.801 "disable_chap": false, 00:06:16.801 "require_chap": false, 00:06:16.801 "mutual_chap": false, 00:06:16.801 "chap_group": 0, 00:06:16.801 "max_large_datain_per_connection": 64, 00:06:16.801 "max_r2t_per_connection": 4, 00:06:16.801 "pdu_pool_size": 36864, 00:06:16.801 "immediate_data_pool_size": 16384, 00:06:16.801 "data_out_pool_size": 2048 00:06:16.801 } 00:06:16.801 } 00:06:16.801 ] 00:06:16.801 } 00:06:16.801 ] 00:06:16.801 } 00:06:16.801 20:07:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:16.801 20:07:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 100146 00:06:16.801 20:07:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 100146 ']' 00:06:16.801 20:07:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 100146 00:06:16.801 20:07:28 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:06:16.801 20:07:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.801 20:07:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100146 00:06:17.059 20:07:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.059 20:07:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.059 20:07:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100146' 00:06:17.059 killing process with pid 100146 00:06:17.059 20:07:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 100146 00:06:17.059 20:07:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 100146 00:06:17.316 20:07:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=100283 00:06:17.316 20:07:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:17.316 20:07:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:22.580 20:07:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 100283 00:06:22.580 20:07:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 100283 ']' 00:06:22.580 20:07:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 100283 00:06:22.580 20:07:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:22.580 20:07:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.580 20:07:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100283 00:06:22.580 20:07:34 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.580 20:07:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.580 20:07:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100283' 00:06:22.580 killing process with pid 100283 00:06:22.580 20:07:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 100283 00:06:22.580 20:07:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 100283 00:06:22.841 20:07:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:22.841 20:07:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:22.841 00:06:22.841 real 0m6.455s 00:06:22.841 user 0m6.101s 00:06:22.841 sys 0m0.698s 00:06:22.841 20:07:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.841 20:07:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:22.841 ************************************ 00:06:22.841 END TEST skip_rpc_with_json 00:06:22.841 ************************************ 00:06:22.841 20:07:34 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:22.841 20:07:34 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.841 20:07:34 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.841 20:07:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.841 ************************************ 00:06:22.841 START TEST skip_rpc_with_delay 00:06:22.841 ************************************ 00:06:22.841 20:07:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:22.841 20:07:34 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:22.841 20:07:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:22.841 20:07:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:22.841 20:07:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.841 20:07:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.841 20:07:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.841 20:07:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.841 20:07:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.841 20:07:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.841 20:07:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.841 20:07:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:22.841 20:07:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:22.841 [2024-11-18 20:07:34.744053] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:22.841 20:07:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:22.841 20:07:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:22.841 20:07:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:22.841 20:07:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:22.841 00:06:22.841 real 0m0.072s 00:06:22.841 user 0m0.042s 00:06:22.841 sys 0m0.030s 00:06:22.841 20:07:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.841 20:07:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:22.841 ************************************ 00:06:22.841 END TEST skip_rpc_with_delay 00:06:22.841 ************************************ 00:06:22.841 20:07:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:22.841 20:07:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:22.841 20:07:34 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:22.841 20:07:34 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.841 20:07:34 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.841 20:07:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.841 ************************************ 00:06:22.841 START TEST exit_on_failed_rpc_init 00:06:22.841 ************************************ 00:06:22.841 20:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:22.841 20:07:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=100999 00:06:22.841 20:07:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:22.841 20:07:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 100999 
00:06:22.841 20:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 100999 ']' 00:06:22.841 20:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.841 20:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.841 20:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.841 20:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.841 20:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:23.101 [2024-11-18 20:07:34.861201] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:23.101 [2024-11-18 20:07:34.861298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100999 ] 00:06:23.101 [2024-11-18 20:07:34.926633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.101 [2024-11-18 20:07:34.971223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.361 20:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.361 20:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:23.361 20:07:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:23.361 20:07:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:23.361 
20:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:23.361 20:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:23.361 20:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:23.361 20:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:23.361 20:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:23.361 20:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:23.361 20:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:23.361 20:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:23.361 20:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:23.361 20:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:23.361 20:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:23.361 [2024-11-18 20:07:35.279970] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:23.361 [2024-11-18 20:07:35.280056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101009 ] 00:06:23.361 [2024-11-18 20:07:35.347552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.620 [2024-11-18 20:07:35.396105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.620 [2024-11-18 20:07:35.396202] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:23.620 [2024-11-18 20:07:35.396221] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:23.620 [2024-11-18 20:07:35.396232] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:23.620 20:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:23.620 20:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:23.620 20:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:23.620 20:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:23.620 20:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:23.620 20:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:23.620 20:07:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:23.620 20:07:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 100999 00:06:23.620 20:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 100999 ']' 00:06:23.620 20:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 100999 00:06:23.620 20:07:35 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:23.620 20:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.620 20:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100999 00:06:23.620 20:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.620 20:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.620 20:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100999' 00:06:23.620 killing process with pid 100999 00:06:23.620 20:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 100999 00:06:23.620 20:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 100999 00:06:23.881 00:06:23.881 real 0m1.060s 00:06:23.881 user 0m1.167s 00:06:23.881 sys 0m0.414s 00:06:23.881 20:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.881 20:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:23.881 ************************************ 00:06:23.881 END TEST exit_on_failed_rpc_init 00:06:23.881 ************************************ 00:06:24.142 20:07:35 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:24.142 00:06:24.142 real 0m13.351s 00:06:24.142 user 0m12.612s 00:06:24.142 sys 0m1.643s 00:06:24.142 20:07:35 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.142 20:07:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.142 ************************************ 00:06:24.142 END TEST skip_rpc 00:06:24.142 ************************************ 00:06:24.142 20:07:35 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:24.142 20:07:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.142 20:07:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.142 20:07:35 -- common/autotest_common.sh@10 -- # set +x 00:06:24.142 ************************************ 00:06:24.142 START TEST rpc_client 00:06:24.142 ************************************ 00:06:24.142 20:07:35 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:24.142 * Looking for test storage... 00:06:24.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:24.142 20:07:35 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:24.142 20:07:35 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:24.142 20:07:35 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:24.142 20:07:36 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:24.142 20:07:36 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.142 20:07:36 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.142 20:07:36 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.142 20:07:36 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.142 20:07:36 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.142 20:07:36 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.142 20:07:36 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.142 20:07:36 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.142 20:07:36 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.142 20:07:36 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.142 20:07:36 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.142 20:07:36 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:06:24.142 20:07:36 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:24.142 20:07:36 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.142 20:07:36 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.142 20:07:36 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:24.142 20:07:36 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:24.142 20:07:36 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.142 20:07:36 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:24.142 20:07:36 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.142 20:07:36 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:24.142 20:07:36 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:24.142 20:07:36 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.142 20:07:36 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:24.142 20:07:36 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.142 20:07:36 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.142 20:07:36 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.142 20:07:36 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:24.142 20:07:36 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.142 20:07:36 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:24.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.142 --rc genhtml_branch_coverage=1 00:06:24.142 --rc genhtml_function_coverage=1 00:06:24.142 --rc genhtml_legend=1 00:06:24.142 --rc geninfo_all_blocks=1 00:06:24.142 --rc geninfo_unexecuted_blocks=1 00:06:24.142 00:06:24.142 ' 00:06:24.142 20:07:36 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:24.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.142 --rc genhtml_branch_coverage=1 
00:06:24.142 --rc genhtml_function_coverage=1 00:06:24.142 --rc genhtml_legend=1 00:06:24.142 --rc geninfo_all_blocks=1 00:06:24.142 --rc geninfo_unexecuted_blocks=1 00:06:24.142 00:06:24.142 ' 00:06:24.142 20:07:36 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:24.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.142 --rc genhtml_branch_coverage=1 00:06:24.142 --rc genhtml_function_coverage=1 00:06:24.142 --rc genhtml_legend=1 00:06:24.142 --rc geninfo_all_blocks=1 00:06:24.142 --rc geninfo_unexecuted_blocks=1 00:06:24.142 00:06:24.142 ' 00:06:24.142 20:07:36 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:24.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.142 --rc genhtml_branch_coverage=1 00:06:24.142 --rc genhtml_function_coverage=1 00:06:24.142 --rc genhtml_legend=1 00:06:24.142 --rc geninfo_all_blocks=1 00:06:24.142 --rc geninfo_unexecuted_blocks=1 00:06:24.142 00:06:24.142 ' 00:06:24.142 20:07:36 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:24.142 OK 00:06:24.142 20:07:36 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:24.142 00:06:24.142 real 0m0.157s 00:06:24.142 user 0m0.093s 00:06:24.142 sys 0m0.072s 00:06:24.142 20:07:36 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.142 20:07:36 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:24.142 ************************************ 00:06:24.142 END TEST rpc_client 00:06:24.142 ************************************ 00:06:24.142 20:07:36 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:24.142 20:07:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.142 20:07:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.142 20:07:36 -- common/autotest_common.sh@10 
-- # set +x 00:06:24.142 ************************************ 00:06:24.142 START TEST json_config 00:06:24.142 ************************************ 00:06:24.142 20:07:36 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:24.401 20:07:36 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:24.401 20:07:36 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:24.401 20:07:36 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:24.401 20:07:36 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:24.401 20:07:36 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.401 20:07:36 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.401 20:07:36 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.401 20:07:36 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.401 20:07:36 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.402 20:07:36 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.402 20:07:36 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.402 20:07:36 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.402 20:07:36 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.402 20:07:36 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.402 20:07:36 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.402 20:07:36 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:24.402 20:07:36 json_config -- scripts/common.sh@345 -- # : 1 00:06:24.402 20:07:36 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.402 20:07:36 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.402 20:07:36 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:24.402 20:07:36 json_config -- scripts/common.sh@353 -- # local d=1 00:06:24.402 20:07:36 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.402 20:07:36 json_config -- scripts/common.sh@355 -- # echo 1 00:06:24.402 20:07:36 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.402 20:07:36 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:24.402 20:07:36 json_config -- scripts/common.sh@353 -- # local d=2 00:06:24.402 20:07:36 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.402 20:07:36 json_config -- scripts/common.sh@355 -- # echo 2 00:06:24.402 20:07:36 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.402 20:07:36 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.402 20:07:36 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.402 20:07:36 json_config -- scripts/common.sh@368 -- # return 0 00:06:24.402 20:07:36 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.402 20:07:36 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:24.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.402 --rc genhtml_branch_coverage=1 00:06:24.402 --rc genhtml_function_coverage=1 00:06:24.402 --rc genhtml_legend=1 00:06:24.402 --rc geninfo_all_blocks=1 00:06:24.402 --rc geninfo_unexecuted_blocks=1 00:06:24.402 00:06:24.402 ' 00:06:24.402 20:07:36 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:24.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.402 --rc genhtml_branch_coverage=1 00:06:24.402 --rc genhtml_function_coverage=1 00:06:24.402 --rc genhtml_legend=1 00:06:24.402 --rc geninfo_all_blocks=1 00:06:24.402 --rc geninfo_unexecuted_blocks=1 00:06:24.402 00:06:24.402 ' 00:06:24.402 20:07:36 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:24.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.402 --rc genhtml_branch_coverage=1 00:06:24.402 --rc genhtml_function_coverage=1 00:06:24.402 --rc genhtml_legend=1 00:06:24.402 --rc geninfo_all_blocks=1 00:06:24.402 --rc geninfo_unexecuted_blocks=1 00:06:24.402 00:06:24.402 ' 00:06:24.402 20:07:36 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:24.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.402 --rc genhtml_branch_coverage=1 00:06:24.402 --rc genhtml_function_coverage=1 00:06:24.402 --rc genhtml_legend=1 00:06:24.402 --rc geninfo_all_blocks=1 00:06:24.402 --rc geninfo_unexecuted_blocks=1 00:06:24.402 00:06:24.402 ' 00:06:24.402 20:07:36 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:24.402 20:07:36 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:24.402 20:07:36 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:24.402 20:07:36 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:24.402 20:07:36 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:24.402 20:07:36 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:24.402 20:07:36 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:24.402 20:07:36 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:24.402 20:07:36 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:24.402 20:07:36 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:24.402 20:07:36 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:24.402 20:07:36 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:24.402 20:07:36 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:24.402 20:07:36 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:24.402 20:07:36 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:24.402 20:07:36 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:24.402 20:07:36 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:24.402 20:07:36 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:24.402 20:07:36 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:24.402 20:07:36 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:24.402 20:07:36 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:24.402 20:07:36 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:24.402 20:07:36 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:24.402 20:07:36 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.402 20:07:36 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.402 20:07:36 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.402 20:07:36 json_config -- paths/export.sh@5 -- # export PATH 00:06:24.402 20:07:36 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.402 20:07:36 json_config -- nvmf/common.sh@51 -- # : 0 00:06:24.402 20:07:36 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:24.402 20:07:36 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:24.402 20:07:36 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:24.402 20:07:36 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:24.402 20:07:36 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:24.402 20:07:36 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:24.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:24.402 20:07:36 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:24.402 20:07:36 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:24.402 20:07:36 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:24.402 20:07:36 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:24.402 20:07:36 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:24.402 20:07:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:24.402 20:07:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:24.402 20:07:36 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:24.402 20:07:36 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:24.402 20:07:36 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:24.402 20:07:36 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:24.402 20:07:36 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:24.402 20:07:36 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:24.402 20:07:36 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:24.402 20:07:36 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:24.402 20:07:36 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:24.402 20:07:36 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:24.402 20:07:36 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:24.402 20:07:36 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:24.402 INFO: JSON configuration test init 00:06:24.402 20:07:36 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:24.402 20:07:36 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:24.402 20:07:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:24.402 20:07:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.402 20:07:36 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:24.402 20:07:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:24.402 20:07:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.402 20:07:36 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:24.402 20:07:36 json_config -- json_config/common.sh@9 -- # local app=target 00:06:24.402 20:07:36 json_config -- json_config/common.sh@10 -- # shift 00:06:24.402 20:07:36 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:24.402 20:07:36 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:24.402 20:07:36 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:24.402 20:07:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:24.402 20:07:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:24.403 20:07:36 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=101269 00:06:24.403 20:07:36 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:24.403 Waiting for target to run... 
00:06:24.403 20:07:36 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:24.403 20:07:36 json_config -- json_config/common.sh@25 -- # waitforlisten 101269 /var/tmp/spdk_tgt.sock 00:06:24.403 20:07:36 json_config -- common/autotest_common.sh@835 -- # '[' -z 101269 ']' 00:06:24.403 20:07:36 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:24.403 20:07:36 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.403 20:07:36 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:24.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:24.403 20:07:36 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.403 20:07:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.403 [2024-11-18 20:07:36.345328] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:24.403 [2024-11-18 20:07:36.345429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101269 ] 00:06:24.973 [2024-11-18 20:07:36.839758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.973 [2024-11-18 20:07:36.881900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.539 20:07:37 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.539 20:07:37 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:25.539 20:07:37 json_config -- json_config/common.sh@26 -- # echo '' 00:06:25.539 00:06:25.539 20:07:37 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:25.539 20:07:37 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:25.539 20:07:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:25.539 20:07:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.539 20:07:37 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:25.539 20:07:37 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:25.539 20:07:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:25.539 20:07:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.539 20:07:37 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:25.539 20:07:37 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:25.539 20:07:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:28.826 20:07:40 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:06:28.826 20:07:40 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:28.826 20:07:40 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:28.826 20:07:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:28.826 20:07:40 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:28.826 20:07:40 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:28.826 20:07:40 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:28.826 20:07:40 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:28.826 20:07:40 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:28.826 20:07:40 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:28.826 20:07:40 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:28.826 20:07:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:28.826 20:07:40 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:28.826 20:07:40 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:28.826 20:07:40 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:28.826 20:07:40 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:28.826 20:07:40 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:28.826 20:07:40 json_config -- json_config/json_config.sh@54 -- # sort 00:06:28.826 20:07:40 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:28.826 20:07:40 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:06:28.826 20:07:40 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:28.826 20:07:40 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:28.826 20:07:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:28.826 20:07:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.085 20:07:40 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:29.085 20:07:40 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:29.085 20:07:40 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:29.085 20:07:40 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:29.085 20:07:40 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:29.085 20:07:40 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:29.085 20:07:40 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:29.085 20:07:40 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:29.085 20:07:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.085 20:07:40 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:29.085 20:07:40 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:29.085 20:07:40 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:29.085 20:07:40 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:29.085 20:07:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:29.344 MallocForNvmf0 00:06:29.344 20:07:41 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:06:29.344 20:07:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:29.602 MallocForNvmf1 00:06:29.602 20:07:41 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:29.602 20:07:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:29.860 [2024-11-18 20:07:41.635151] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.860 20:07:41 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:29.860 20:07:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:30.118 20:07:41 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:30.118 20:07:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:30.377 20:07:42 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:30.377 20:07:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:30.635 20:07:42 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:30.635 20:07:42 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:30.893 [2024-11-18 20:07:42.706454] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:30.893 20:07:42 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:30.893 20:07:42 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:30.893 20:07:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.893 20:07:42 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:30.893 20:07:42 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:30.893 20:07:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.893 20:07:42 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:30.893 20:07:42 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:30.893 20:07:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:31.151 MallocBdevForConfigChangeCheck 00:06:31.151 20:07:43 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:31.151 20:07:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:31.151 20:07:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.151 20:07:43 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:31.151 20:07:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:31.718 20:07:43 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:06:31.718 INFO: shutting down applications... 00:06:31.718 20:07:43 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:31.718 20:07:43 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:31.718 20:07:43 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:31.718 20:07:43 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:33.619 Calling clear_iscsi_subsystem 00:06:33.619 Calling clear_nvmf_subsystem 00:06:33.619 Calling clear_nbd_subsystem 00:06:33.619 Calling clear_ublk_subsystem 00:06:33.619 Calling clear_vhost_blk_subsystem 00:06:33.619 Calling clear_vhost_scsi_subsystem 00:06:33.619 Calling clear_bdev_subsystem 00:06:33.619 20:07:45 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:33.619 20:07:45 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:33.619 20:07:45 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:33.620 20:07:45 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:33.620 20:07:45 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:33.620 20:07:45 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:33.620 20:07:45 json_config -- json_config/json_config.sh@352 -- # break 00:06:33.620 20:07:45 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:33.620 20:07:45 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:06:33.620 20:07:45 json_config -- json_config/common.sh@31 -- # local app=target 00:06:33.620 20:07:45 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:33.620 20:07:45 json_config -- json_config/common.sh@35 -- # [[ -n 101269 ]] 00:06:33.620 20:07:45 json_config -- json_config/common.sh@38 -- # kill -SIGINT 101269 00:06:33.620 20:07:45 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:33.620 20:07:45 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:33.620 20:07:45 json_config -- json_config/common.sh@41 -- # kill -0 101269 00:06:33.620 20:07:45 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:34.193 20:07:46 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:34.193 20:07:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:34.193 20:07:46 json_config -- json_config/common.sh@41 -- # kill -0 101269 00:06:34.193 20:07:46 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:34.193 20:07:46 json_config -- json_config/common.sh@43 -- # break 00:06:34.193 20:07:46 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:34.193 20:07:46 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:34.193 SPDK target shutdown done 00:06:34.193 20:07:46 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:34.193 INFO: relaunching applications... 
00:06:34.193 20:07:46 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:34.193 20:07:46 json_config -- json_config/common.sh@9 -- # local app=target 00:06:34.193 20:07:46 json_config -- json_config/common.sh@10 -- # shift 00:06:34.193 20:07:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:34.193 20:07:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:34.193 20:07:46 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:34.193 20:07:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:34.193 20:07:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:34.193 20:07:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=102587 00:06:34.193 20:07:46 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:34.193 20:07:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:34.193 Waiting for target to run... 00:06:34.193 20:07:46 json_config -- json_config/common.sh@25 -- # waitforlisten 102587 /var/tmp/spdk_tgt.sock 00:06:34.193 20:07:46 json_config -- common/autotest_common.sh@835 -- # '[' -z 102587 ']' 00:06:34.193 20:07:46 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:34.193 20:07:46 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.193 20:07:46 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:34.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:34.193 20:07:46 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.193 20:07:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.193 [2024-11-18 20:07:46.117140] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:34.193 [2024-11-18 20:07:46.117257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102587 ] 00:06:34.762 [2024-11-18 20:07:46.641109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.762 [2024-11-18 20:07:46.680932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.053 [2024-11-18 20:07:49.727943] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:38.053 [2024-11-18 20:07:49.760383] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:38.053 20:07:49 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.053 20:07:49 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:38.053 20:07:49 json_config -- json_config/common.sh@26 -- # echo '' 00:06:38.053 00:06:38.053 20:07:49 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:38.053 20:07:49 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:38.053 INFO: Checking if target configuration is the same... 
00:06:38.053 20:07:49 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:38.053 20:07:49 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:38.053 20:07:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:38.053 + '[' 2 -ne 2 ']' 00:06:38.053 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:38.053 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:38.053 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:38.053 +++ basename /dev/fd/62 00:06:38.053 ++ mktemp /tmp/62.XXX 00:06:38.053 + tmp_file_1=/tmp/62.2sP 00:06:38.053 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:38.053 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:38.053 + tmp_file_2=/tmp/spdk_tgt_config.json.LI9 00:06:38.053 + ret=0 00:06:38.053 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:38.311 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:38.311 + diff -u /tmp/62.2sP /tmp/spdk_tgt_config.json.LI9 00:06:38.311 + echo 'INFO: JSON config files are the same' 00:06:38.311 INFO: JSON config files are the same 00:06:38.311 + rm /tmp/62.2sP /tmp/spdk_tgt_config.json.LI9 00:06:38.311 + exit 0 00:06:38.311 20:07:50 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:38.311 20:07:50 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:38.311 INFO: changing configuration and checking if this can be detected... 
00:06:38.311 20:07:50 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:38.311 20:07:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:38.569 20:07:50 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:38.569 20:07:50 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:38.569 20:07:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:38.569 + '[' 2 -ne 2 ']' 00:06:38.569 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:38.569 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:38.569 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:38.569 +++ basename /dev/fd/62 00:06:38.569 ++ mktemp /tmp/62.XXX 00:06:38.569 + tmp_file_1=/tmp/62.Loq 00:06:38.569 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:38.569 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:38.569 + tmp_file_2=/tmp/spdk_tgt_config.json.U4x 00:06:38.569 + ret=0 00:06:38.569 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:39.134 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:39.134 + diff -u /tmp/62.Loq /tmp/spdk_tgt_config.json.U4x 00:06:39.134 + ret=1 00:06:39.134 + echo '=== Start of file: /tmp/62.Loq ===' 00:06:39.134 + cat /tmp/62.Loq 00:06:39.134 + echo '=== End of file: /tmp/62.Loq ===' 00:06:39.134 + echo '' 00:06:39.134 + echo '=== Start of file: /tmp/spdk_tgt_config.json.U4x ===' 00:06:39.134 + cat /tmp/spdk_tgt_config.json.U4x 00:06:39.134 + echo '=== End of file: /tmp/spdk_tgt_config.json.U4x ===' 00:06:39.134 + echo '' 00:06:39.134 + rm /tmp/62.Loq /tmp/spdk_tgt_config.json.U4x 00:06:39.134 + exit 1 00:06:39.134 20:07:51 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:39.134 INFO: configuration change detected. 
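The comparison above normalizes both JSON dumps (sorting them through `config_filter.py -method sort`) into `mktemp` files before running `diff -u`, so key ordering never counts as a configuration change. A sketch of the same idea, with `python3 -m json.tool --sort-keys` standing in for the SPDK filter script (an assumption; the real test uses `test/json_config/config_filter.py`):

```shell
#!/usr/bin/env bash
# Compare two JSON configs for semantic equality: normalize each into a
# temp file (sorted keys, stable indentation), then diff the results.
json_same() {
    local a=$1 b=$2
    local tmp1 tmp2 ret=0
    tmp1=$(mktemp /tmp/cfg.XXX)
    tmp2=$(mktemp /tmp/cfg.XXX)
    python3 -m json.tool --sort-keys "$a" > "$tmp1"
    python3 -m json.tool --sort-keys "$b" > "$tmp2"
    diff -u "$tmp1" "$tmp2" || ret=1
    rm -f "$tmp1" "$tmp2"
    return $ret
}

# Usage: same keys, different order -> considered identical.
echo '{"b": 2, "a": 1}' > /tmp/cfg_a.json
echo '{"a": 1, "b": 2}' > /tmp/cfg_b.json
if json_same /tmp/cfg_a.json /tmp/cfg_b.json; then
    echo 'INFO: JSON config files are the same'
fi
```

This is also why the second half of the trace deletes `MallocBdevForConfigChangeCheck` first: a real content change must survive normalization to produce `ret=1`.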
00:06:39.134 20:07:51 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:39.134 20:07:51 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:39.134 20:07:51 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:39.134 20:07:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:39.134 20:07:51 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:39.134 20:07:51 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:39.134 20:07:51 json_config -- json_config/json_config.sh@324 -- # [[ -n 102587 ]] 00:06:39.134 20:07:51 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:39.134 20:07:51 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:39.134 20:07:51 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:39.134 20:07:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:39.134 20:07:51 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:39.134 20:07:51 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:39.134 20:07:51 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:39.134 20:07:51 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:39.134 20:07:51 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:39.134 20:07:51 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:39.134 20:07:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:39.134 20:07:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:39.134 20:07:51 json_config -- json_config/json_config.sh@330 -- # killprocess 102587 00:06:39.134 20:07:51 json_config -- common/autotest_common.sh@954 -- # '[' -z 102587 ']' 00:06:39.134 20:07:51 json_config -- common/autotest_common.sh@958 -- # kill -0 102587 
00:06:39.134 20:07:51 json_config -- common/autotest_common.sh@959 -- # uname 00:06:39.134 20:07:51 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.134 20:07:51 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102587 00:06:39.134 20:07:51 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.134 20:07:51 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.134 20:07:51 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102587' 00:06:39.134 killing process with pid 102587 00:06:39.134 20:07:51 json_config -- common/autotest_common.sh@973 -- # kill 102587 00:06:39.134 20:07:51 json_config -- common/autotest_common.sh@978 -- # wait 102587 00:06:41.036 20:07:52 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:41.036 20:07:52 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:41.036 20:07:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:41.036 20:07:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.036 20:07:52 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:41.036 20:07:52 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:41.036 INFO: Success 00:06:41.036 00:06:41.036 real 0m16.589s 00:06:41.036 user 0m18.613s 00:06:41.036 sys 0m2.266s 00:06:41.036 20:07:52 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.036 20:07:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.036 ************************************ 00:06:41.036 END TEST json_config 00:06:41.036 ************************************ 00:06:41.036 20:07:52 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:41.036 20:07:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.036 20:07:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.036 20:07:52 -- common/autotest_common.sh@10 -- # set +x 00:06:41.036 ************************************ 00:06:41.036 START TEST json_config_extra_key 00:06:41.036 ************************************ 00:06:41.036 20:07:52 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:41.036 20:07:52 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:41.036 20:07:52 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:41.036 20:07:52 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:41.036 20:07:52 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:41.036 20:07:52 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.036 20:07:52 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.036 20:07:52 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.036 20:07:52 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.036 20:07:52 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.036 20:07:52 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.036 20:07:52 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.036 20:07:52 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.036 20:07:52 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.036 20:07:52 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.036 20:07:52 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:06:41.036 20:07:52 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:41.036 20:07:52 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:41.036 20:07:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.036 20:07:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:41.036 20:07:52 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:41.036 20:07:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:41.037 20:07:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.037 20:07:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:41.037 20:07:52 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.037 20:07:52 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:41.037 20:07:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:41.037 20:07:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.037 20:07:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:41.037 20:07:52 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.037 20:07:52 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.037 20:07:52 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.037 20:07:52 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:41.037 20:07:52 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.037 20:07:52 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:41.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.037 --rc genhtml_branch_coverage=1 00:06:41.037 --rc genhtml_function_coverage=1 00:06:41.037 --rc genhtml_legend=1 00:06:41.037 --rc geninfo_all_blocks=1 
00:06:41.037 --rc geninfo_unexecuted_blocks=1 00:06:41.037 00:06:41.037 ' 00:06:41.037 20:07:52 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:41.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.037 --rc genhtml_branch_coverage=1 00:06:41.037 --rc genhtml_function_coverage=1 00:06:41.037 --rc genhtml_legend=1 00:06:41.037 --rc geninfo_all_blocks=1 00:06:41.037 --rc geninfo_unexecuted_blocks=1 00:06:41.037 00:06:41.037 ' 00:06:41.037 20:07:52 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:41.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.037 --rc genhtml_branch_coverage=1 00:06:41.037 --rc genhtml_function_coverage=1 00:06:41.037 --rc genhtml_legend=1 00:06:41.037 --rc geninfo_all_blocks=1 00:06:41.037 --rc geninfo_unexecuted_blocks=1 00:06:41.037 00:06:41.037 ' 00:06:41.037 20:07:52 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:41.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.037 --rc genhtml_branch_coverage=1 00:06:41.037 --rc genhtml_function_coverage=1 00:06:41.037 --rc genhtml_legend=1 00:06:41.037 --rc geninfo_all_blocks=1 00:06:41.037 --rc geninfo_unexecuted_blocks=1 00:06:41.037 00:06:41.037 ' 00:06:41.037 20:07:52 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:41.037 20:07:52 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:41.037 20:07:52 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:41.037 20:07:52 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:41.037 20:07:52 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:41.037 20:07:52 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:41.037 20:07:52 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
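The `lt 1.15 2` check traced above splits each version string on `.`, `-`, or `:` (`IFS=.-:` + `read -ra`) and compares the numeric components left to right. A simplified sketch of that comparison, assuming purely numeric components (the real `scripts/common.sh` `cmp_versions` handles more cases):

```shell
#!/usr/bin/env bash
# Component-wise numeric version comparison: returns 0 if $1 < $2.
# Splitting on . - : mirrors the IFS=.-: / read -ra step in the trace.
version_lt() {
    local IFS='.-:'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing component = 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not less-than
}

version_lt 1.15 2 && echo '1.15 < 2'
```

Comparing components numerically rather than as strings is the point of the exercise: a plain string comparison would wrongly rank `1.9` above `1.15`.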
00:06:41.037 20:07:52 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:41.037 20:07:52 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:41.037 20:07:52 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:41.037 20:07:52 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:41.037 20:07:52 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:41.037 20:07:52 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:41.037 20:07:52 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:41.037 20:07:52 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:41.037 20:07:52 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:41.037 20:07:52 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:41.037 20:07:52 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:41.037 20:07:52 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:41.037 20:07:52 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:41.037 20:07:52 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.037 20:07:52 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.037 20:07:52 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.037 20:07:52 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.037 20:07:52 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.037 20:07:52 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.037 20:07:52 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:41.037 20:07:52 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.037 20:07:52 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:41.037 20:07:52 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:41.037 20:07:52 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:41.037 20:07:52 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:41.037 20:07:52 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:41.037 20:07:52 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:41.037 20:07:52 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:41.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:41.037 20:07:52 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:41.037 20:07:52 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:41.037 20:07:52 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:41.037 20:07:52 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:41.037 20:07:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:41.037 20:07:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:41.037 20:07:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:41.037 20:07:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:41.037 20:07:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:41.037 20:07:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:41.037 20:07:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:41.037 20:07:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:41.037 20:07:52 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:41.037 20:07:52 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:41.037 INFO: launching applications... 00:06:41.037 20:07:52 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:41.037 20:07:52 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:41.037 20:07:52 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:41.037 20:07:52 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:41.038 20:07:52 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:41.038 20:07:52 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:41.038 20:07:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:41.038 20:07:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:41.038 20:07:52 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=103508 00:06:41.038 20:07:52 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:41.038 20:07:52 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:41.038 Waiting for target to run... 
00:06:41.038 20:07:52 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 103508 /var/tmp/spdk_tgt.sock 00:06:41.038 20:07:52 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 103508 ']' 00:06:41.038 20:07:52 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:41.038 20:07:52 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.038 20:07:52 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:41.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:41.038 20:07:52 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.038 20:07:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:41.038 [2024-11-18 20:07:52.977968] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:41.038 [2024-11-18 20:07:52.978052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103508 ] 00:06:41.606 [2024-11-18 20:07:53.316744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.606 [2024-11-18 20:07:53.348113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.173 20:07:53 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.173 20:07:53 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:42.173 20:07:53 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:42.173 00:06:42.173 20:07:53 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
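The `waitforlisten` call above polls (up to `max_retries=100`) until the freshly launched target is both alive and reachable on its UNIX domain socket, failing fast if the process dies first. A minimal sketch of that idea; note the real helper in `common/autotest_common.sh` probes via `rpc.py`, whereas this simplification only checks that the socket file exists:

```shell
#!/usr/bin/env bash
# Wait for a process to come up and listen on a UNIX domain socket:
# bail out if the pid dies, otherwise retry the socket check.
waitforlisten() {
    local pid=$1 sock=$2 max_retries=100 i
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # process died early
        if [ -S "$sock" ]; then
            echo "Listening on $sock"
            return 0
        fi
        sleep 0.1
    done
    return 1
}

# Usage: a background python listener stands in for spdk_tgt.
rm -f /tmp/demo.sock
python3 -c 'import socket,time; s=socket.socket(socket.AF_UNIX); s.bind("/tmp/demo.sock"); s.listen(); time.sleep(2)' &
waitforlisten $! /tmp/demo.sock
```

Checking liveness before each probe is what turns a startup crash into an immediate failure instead of a 100-retry timeout.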
00:06:42.173 INFO: shutting down applications... 00:06:42.173 20:07:53 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:42.173 20:07:53 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:42.173 20:07:53 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:42.173 20:07:53 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 103508 ]] 00:06:42.173 20:07:53 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 103508 00:06:42.173 20:07:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:42.173 20:07:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:42.173 20:07:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 103508 00:06:42.173 20:07:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:42.742 20:07:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:42.742 20:07:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:42.742 20:07:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 103508 00:06:42.742 20:07:54 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:42.742 20:07:54 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:42.742 20:07:54 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:42.742 20:07:54 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:42.742 SPDK target shutdown done 00:06:42.742 20:07:54 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:42.742 Success 00:06:42.742 00:06:42.742 real 0m1.685s 00:06:42.742 user 0m1.660s 00:06:42.742 sys 0m0.442s 00:06:42.742 20:07:54 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.742 20:07:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 
00:06:42.742 ************************************ 00:06:42.742 END TEST json_config_extra_key 00:06:42.742 ************************************ 00:06:42.742 20:07:54 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:42.742 20:07:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.742 20:07:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.742 20:07:54 -- common/autotest_common.sh@10 -- # set +x 00:06:42.742 ************************************ 00:06:42.742 START TEST alias_rpc 00:06:42.742 ************************************ 00:06:42.742 20:07:54 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:42.742 * Looking for test storage... 00:06:42.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:42.742 20:07:54 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:42.742 20:07:54 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:42.742 20:07:54 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:42.742 20:07:54 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:42.742 20:07:54 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.742 20:07:54 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.742 20:07:54 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.742 20:07:54 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.742 20:07:54 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.742 20:07:54 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.742 20:07:54 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.742 20:07:54 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.742 20:07:54 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 
00:06:42.742 20:07:54 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.743 20:07:54 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.743 20:07:54 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:42.743 20:07:54 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:42.743 20:07:54 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.743 20:07:54 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:42.743 20:07:54 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:42.743 20:07:54 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:42.743 20:07:54 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.743 20:07:54 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:42.743 20:07:54 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.743 20:07:54 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:42.743 20:07:54 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:42.743 20:07:54 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.743 20:07:54 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:42.743 20:07:54 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.743 20:07:54 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.743 20:07:54 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.743 20:07:54 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:42.743 20:07:54 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.743 20:07:54 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:42.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.743 --rc genhtml_branch_coverage=1 00:06:42.743 --rc genhtml_function_coverage=1 00:06:42.743 --rc genhtml_legend=1 00:06:42.743 --rc geninfo_all_blocks=1 00:06:42.743 --rc geninfo_unexecuted_blocks=1 00:06:42.743 00:06:42.743 ' 
00:06:42.743 20:07:54 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:42.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.743 --rc genhtml_branch_coverage=1 00:06:42.743 --rc genhtml_function_coverage=1 00:06:42.743 --rc genhtml_legend=1 00:06:42.743 --rc geninfo_all_blocks=1 00:06:42.743 --rc geninfo_unexecuted_blocks=1 00:06:42.743 00:06:42.743 ' 00:06:42.743 20:07:54 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:42.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.743 --rc genhtml_branch_coverage=1 00:06:42.743 --rc genhtml_function_coverage=1 00:06:42.743 --rc genhtml_legend=1 00:06:42.743 --rc geninfo_all_blocks=1 00:06:42.743 --rc geninfo_unexecuted_blocks=1 00:06:42.743 00:06:42.743 ' 00:06:42.743 20:07:54 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:42.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.743 --rc genhtml_branch_coverage=1 00:06:42.743 --rc genhtml_function_coverage=1 00:06:42.743 --rc genhtml_legend=1 00:06:42.743 --rc geninfo_all_blocks=1 00:06:42.743 --rc geninfo_unexecuted_blocks=1 00:06:42.743 00:06:42.743 ' 00:06:42.743 20:07:54 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:42.743 20:07:54 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=103824 00:06:42.743 20:07:54 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:42.743 20:07:54 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 103824 00:06:42.743 20:07:54 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 103824 ']' 00:06:42.743 20:07:54 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.743 20:07:54 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.743 20:07:54 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.743 20:07:54 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.743 20:07:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.743 [2024-11-18 20:07:54.718534] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:42.743 [2024-11-18 20:07:54.718663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103824 ] 00:06:43.002 [2024-11-18 20:07:54.787214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.002 [2024-11-18 20:07:54.833017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.261 20:07:55 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.261 20:07:55 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:43.261 20:07:55 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:43.520 20:07:55 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 103824 00:06:43.520 20:07:55 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 103824 ']' 00:06:43.520 20:07:55 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 103824 00:06:43.520 20:07:55 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:43.520 20:07:55 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.520 20:07:55 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103824 00:06:43.520 20:07:55 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.520 20:07:55 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.520 20:07:55 alias_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 103824' 00:06:43.520 killing process with pid 103824 00:06:43.520 20:07:55 alias_rpc -- common/autotest_common.sh@973 -- # kill 103824 00:06:43.520 20:07:55 alias_rpc -- common/autotest_common.sh@978 -- # wait 103824 00:06:43.779 00:06:43.779 real 0m1.265s 00:06:43.779 user 0m1.371s 00:06:43.779 sys 0m0.440s 00:06:43.779 20:07:55 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.779 20:07:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.779 ************************************ 00:06:43.779 END TEST alias_rpc 00:06:43.779 ************************************ 00:06:44.039 20:07:55 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:44.039 20:07:55 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:44.039 20:07:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.039 20:07:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.039 20:07:55 -- common/autotest_common.sh@10 -- # set +x 00:06:44.039 ************************************ 00:06:44.039 START TEST spdkcli_tcp 00:06:44.039 ************************************ 00:06:44.039 20:07:55 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:44.039 * Looking for test storage... 
00:06:44.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:44.039 20:07:55 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:44.039 20:07:55 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:44.039 20:07:55 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:44.039 20:07:55 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:44.039 20:07:55 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.039 20:07:55 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.039 20:07:55 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.039 20:07:55 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.039 20:07:55 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.039 20:07:55 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.039 20:07:55 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.039 20:07:55 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.039 20:07:55 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.039 20:07:55 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.039 20:07:55 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.039 20:07:55 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:44.039 20:07:55 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:44.039 20:07:55 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.039 20:07:55 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.039 20:07:55 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:44.039 20:07:55 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:44.039 20:07:55 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.039 20:07:55 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:44.039 20:07:55 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.039 20:07:55 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:44.039 20:07:55 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:44.039 20:07:55 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.039 20:07:55 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:44.039 20:07:55 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.039 20:07:55 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.039 20:07:55 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.039 20:07:55 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:44.039 20:07:55 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.039 20:07:55 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:44.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.039 --rc genhtml_branch_coverage=1 00:06:44.039 --rc genhtml_function_coverage=1 00:06:44.039 --rc genhtml_legend=1 00:06:44.039 --rc geninfo_all_blocks=1 00:06:44.039 --rc geninfo_unexecuted_blocks=1 00:06:44.039 00:06:44.039 ' 00:06:44.039 20:07:55 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:44.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.039 --rc genhtml_branch_coverage=1 00:06:44.039 --rc genhtml_function_coverage=1 00:06:44.039 --rc genhtml_legend=1 00:06:44.039 --rc geninfo_all_blocks=1 00:06:44.039 --rc geninfo_unexecuted_blocks=1 00:06:44.039 00:06:44.039 ' 00:06:44.039 20:07:55 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:44.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.039 --rc genhtml_branch_coverage=1 00:06:44.039 --rc genhtml_function_coverage=1 00:06:44.039 --rc genhtml_legend=1 00:06:44.039 --rc geninfo_all_blocks=1 00:06:44.039 --rc geninfo_unexecuted_blocks=1 00:06:44.039 00:06:44.039 ' 00:06:44.039 20:07:55 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:44.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.039 --rc genhtml_branch_coverage=1 00:06:44.039 --rc genhtml_function_coverage=1 00:06:44.039 --rc genhtml_legend=1 00:06:44.039 --rc geninfo_all_blocks=1 00:06:44.039 --rc geninfo_unexecuted_blocks=1 00:06:44.039 00:06:44.039 ' 00:06:44.039 20:07:55 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:44.039 20:07:55 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:44.039 20:07:55 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:44.039 20:07:55 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:44.039 20:07:55 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:44.039 20:07:55 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:44.039 20:07:55 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:44.039 20:07:55 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:44.039 20:07:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:44.039 20:07:55 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=104023 00:06:44.039 20:07:55 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:44.039 20:07:55 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 104023 00:06:44.039 20:07:55 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 104023 ']' 00:06:44.039 20:07:55 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.039 20:07:55 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.039 20:07:55 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.039 20:07:55 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.039 20:07:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:44.039 [2024-11-18 20:07:56.038009] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:44.040 [2024-11-18 20:07:56.038091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104023 ] 00:06:44.299 [2024-11-18 20:07:56.104376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:44.299 [2024-11-18 20:07:56.151957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.299 [2024-11-18 20:07:56.151962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.557 20:07:56 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.558 20:07:56 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:44.558 20:07:56 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=104032 00:06:44.558 20:07:56 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:44.558 20:07:56 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 
UNIX-CONNECT:/var/tmp/spdk.sock 00:06:44.816 [ 00:06:44.816 "bdev_malloc_delete", 00:06:44.816 "bdev_malloc_create", 00:06:44.816 "bdev_null_resize", 00:06:44.816 "bdev_null_delete", 00:06:44.816 "bdev_null_create", 00:06:44.816 "bdev_nvme_cuse_unregister", 00:06:44.816 "bdev_nvme_cuse_register", 00:06:44.816 "bdev_opal_new_user", 00:06:44.816 "bdev_opal_set_lock_state", 00:06:44.816 "bdev_opal_delete", 00:06:44.816 "bdev_opal_get_info", 00:06:44.816 "bdev_opal_create", 00:06:44.816 "bdev_nvme_opal_revert", 00:06:44.816 "bdev_nvme_opal_init", 00:06:44.816 "bdev_nvme_send_cmd", 00:06:44.816 "bdev_nvme_set_keys", 00:06:44.816 "bdev_nvme_get_path_iostat", 00:06:44.816 "bdev_nvme_get_mdns_discovery_info", 00:06:44.816 "bdev_nvme_stop_mdns_discovery", 00:06:44.816 "bdev_nvme_start_mdns_discovery", 00:06:44.816 "bdev_nvme_set_multipath_policy", 00:06:44.816 "bdev_nvme_set_preferred_path", 00:06:44.816 "bdev_nvme_get_io_paths", 00:06:44.816 "bdev_nvme_remove_error_injection", 00:06:44.816 "bdev_nvme_add_error_injection", 00:06:44.816 "bdev_nvme_get_discovery_info", 00:06:44.816 "bdev_nvme_stop_discovery", 00:06:44.816 "bdev_nvme_start_discovery", 00:06:44.816 "bdev_nvme_get_controller_health_info", 00:06:44.816 "bdev_nvme_disable_controller", 00:06:44.816 "bdev_nvme_enable_controller", 00:06:44.816 "bdev_nvme_reset_controller", 00:06:44.816 "bdev_nvme_get_transport_statistics", 00:06:44.816 "bdev_nvme_apply_firmware", 00:06:44.816 "bdev_nvme_detach_controller", 00:06:44.816 "bdev_nvme_get_controllers", 00:06:44.816 "bdev_nvme_attach_controller", 00:06:44.816 "bdev_nvme_set_hotplug", 00:06:44.816 "bdev_nvme_set_options", 00:06:44.816 "bdev_passthru_delete", 00:06:44.816 "bdev_passthru_create", 00:06:44.816 "bdev_lvol_set_parent_bdev", 00:06:44.816 "bdev_lvol_set_parent", 00:06:44.816 "bdev_lvol_check_shallow_copy", 00:06:44.816 "bdev_lvol_start_shallow_copy", 00:06:44.816 "bdev_lvol_grow_lvstore", 00:06:44.816 "bdev_lvol_get_lvols", 00:06:44.816 "bdev_lvol_get_lvstores", 
00:06:44.816 "bdev_lvol_delete", 00:06:44.816 "bdev_lvol_set_read_only", 00:06:44.816 "bdev_lvol_resize", 00:06:44.816 "bdev_lvol_decouple_parent", 00:06:44.816 "bdev_lvol_inflate", 00:06:44.816 "bdev_lvol_rename", 00:06:44.816 "bdev_lvol_clone_bdev", 00:06:44.816 "bdev_lvol_clone", 00:06:44.816 "bdev_lvol_snapshot", 00:06:44.816 "bdev_lvol_create", 00:06:44.816 "bdev_lvol_delete_lvstore", 00:06:44.816 "bdev_lvol_rename_lvstore", 00:06:44.816 "bdev_lvol_create_lvstore", 00:06:44.816 "bdev_raid_set_options", 00:06:44.816 "bdev_raid_remove_base_bdev", 00:06:44.816 "bdev_raid_add_base_bdev", 00:06:44.816 "bdev_raid_delete", 00:06:44.816 "bdev_raid_create", 00:06:44.816 "bdev_raid_get_bdevs", 00:06:44.816 "bdev_error_inject_error", 00:06:44.816 "bdev_error_delete", 00:06:44.816 "bdev_error_create", 00:06:44.816 "bdev_split_delete", 00:06:44.816 "bdev_split_create", 00:06:44.816 "bdev_delay_delete", 00:06:44.816 "bdev_delay_create", 00:06:44.816 "bdev_delay_update_latency", 00:06:44.816 "bdev_zone_block_delete", 00:06:44.816 "bdev_zone_block_create", 00:06:44.816 "blobfs_create", 00:06:44.816 "blobfs_detect", 00:06:44.816 "blobfs_set_cache_size", 00:06:44.816 "bdev_aio_delete", 00:06:44.816 "bdev_aio_rescan", 00:06:44.816 "bdev_aio_create", 00:06:44.817 "bdev_ftl_set_property", 00:06:44.817 "bdev_ftl_get_properties", 00:06:44.817 "bdev_ftl_get_stats", 00:06:44.817 "bdev_ftl_unmap", 00:06:44.817 "bdev_ftl_unload", 00:06:44.817 "bdev_ftl_delete", 00:06:44.817 "bdev_ftl_load", 00:06:44.817 "bdev_ftl_create", 00:06:44.817 "bdev_virtio_attach_controller", 00:06:44.817 "bdev_virtio_scsi_get_devices", 00:06:44.817 "bdev_virtio_detach_controller", 00:06:44.817 "bdev_virtio_blk_set_hotplug", 00:06:44.817 "bdev_iscsi_delete", 00:06:44.817 "bdev_iscsi_create", 00:06:44.817 "bdev_iscsi_set_options", 00:06:44.817 "accel_error_inject_error", 00:06:44.817 "ioat_scan_accel_module", 00:06:44.817 "dsa_scan_accel_module", 00:06:44.817 "iaa_scan_accel_module", 00:06:44.817 
"vfu_virtio_create_fs_endpoint", 00:06:44.817 "vfu_virtio_create_scsi_endpoint", 00:06:44.817 "vfu_virtio_scsi_remove_target", 00:06:44.817 "vfu_virtio_scsi_add_target", 00:06:44.817 "vfu_virtio_create_blk_endpoint", 00:06:44.817 "vfu_virtio_delete_endpoint", 00:06:44.817 "keyring_file_remove_key", 00:06:44.817 "keyring_file_add_key", 00:06:44.817 "keyring_linux_set_options", 00:06:44.817 "fsdev_aio_delete", 00:06:44.817 "fsdev_aio_create", 00:06:44.817 "iscsi_get_histogram", 00:06:44.817 "iscsi_enable_histogram", 00:06:44.817 "iscsi_set_options", 00:06:44.817 "iscsi_get_auth_groups", 00:06:44.817 "iscsi_auth_group_remove_secret", 00:06:44.817 "iscsi_auth_group_add_secret", 00:06:44.817 "iscsi_delete_auth_group", 00:06:44.817 "iscsi_create_auth_group", 00:06:44.817 "iscsi_set_discovery_auth", 00:06:44.817 "iscsi_get_options", 00:06:44.817 "iscsi_target_node_request_logout", 00:06:44.817 "iscsi_target_node_set_redirect", 00:06:44.817 "iscsi_target_node_set_auth", 00:06:44.817 "iscsi_target_node_add_lun", 00:06:44.817 "iscsi_get_stats", 00:06:44.817 "iscsi_get_connections", 00:06:44.817 "iscsi_portal_group_set_auth", 00:06:44.817 "iscsi_start_portal_group", 00:06:44.817 "iscsi_delete_portal_group", 00:06:44.817 "iscsi_create_portal_group", 00:06:44.817 "iscsi_get_portal_groups", 00:06:44.817 "iscsi_delete_target_node", 00:06:44.817 "iscsi_target_node_remove_pg_ig_maps", 00:06:44.817 "iscsi_target_node_add_pg_ig_maps", 00:06:44.817 "iscsi_create_target_node", 00:06:44.817 "iscsi_get_target_nodes", 00:06:44.817 "iscsi_delete_initiator_group", 00:06:44.817 "iscsi_initiator_group_remove_initiators", 00:06:44.817 "iscsi_initiator_group_add_initiators", 00:06:44.817 "iscsi_create_initiator_group", 00:06:44.817 "iscsi_get_initiator_groups", 00:06:44.817 "nvmf_set_crdt", 00:06:44.817 "nvmf_set_config", 00:06:44.817 "nvmf_set_max_subsystems", 00:06:44.817 "nvmf_stop_mdns_prr", 00:06:44.817 "nvmf_publish_mdns_prr", 00:06:44.817 "nvmf_subsystem_get_listeners", 00:06:44.817 
"nvmf_subsystem_get_qpairs", 00:06:44.817 "nvmf_subsystem_get_controllers", 00:06:44.817 "nvmf_get_stats", 00:06:44.817 "nvmf_get_transports", 00:06:44.817 "nvmf_create_transport", 00:06:44.817 "nvmf_get_targets", 00:06:44.817 "nvmf_delete_target", 00:06:44.817 "nvmf_create_target", 00:06:44.817 "nvmf_subsystem_allow_any_host", 00:06:44.817 "nvmf_subsystem_set_keys", 00:06:44.817 "nvmf_subsystem_remove_host", 00:06:44.817 "nvmf_subsystem_add_host", 00:06:44.817 "nvmf_ns_remove_host", 00:06:44.817 "nvmf_ns_add_host", 00:06:44.817 "nvmf_subsystem_remove_ns", 00:06:44.817 "nvmf_subsystem_set_ns_ana_group", 00:06:44.817 "nvmf_subsystem_add_ns", 00:06:44.817 "nvmf_subsystem_listener_set_ana_state", 00:06:44.817 "nvmf_discovery_get_referrals", 00:06:44.817 "nvmf_discovery_remove_referral", 00:06:44.817 "nvmf_discovery_add_referral", 00:06:44.817 "nvmf_subsystem_remove_listener", 00:06:44.817 "nvmf_subsystem_add_listener", 00:06:44.817 "nvmf_delete_subsystem", 00:06:44.817 "nvmf_create_subsystem", 00:06:44.817 "nvmf_get_subsystems", 00:06:44.817 "env_dpdk_get_mem_stats", 00:06:44.817 "nbd_get_disks", 00:06:44.817 "nbd_stop_disk", 00:06:44.817 "nbd_start_disk", 00:06:44.817 "ublk_recover_disk", 00:06:44.817 "ublk_get_disks", 00:06:44.817 "ublk_stop_disk", 00:06:44.817 "ublk_start_disk", 00:06:44.817 "ublk_destroy_target", 00:06:44.817 "ublk_create_target", 00:06:44.817 "virtio_blk_create_transport", 00:06:44.817 "virtio_blk_get_transports", 00:06:44.817 "vhost_controller_set_coalescing", 00:06:44.817 "vhost_get_controllers", 00:06:44.817 "vhost_delete_controller", 00:06:44.817 "vhost_create_blk_controller", 00:06:44.817 "vhost_scsi_controller_remove_target", 00:06:44.817 "vhost_scsi_controller_add_target", 00:06:44.817 "vhost_start_scsi_controller", 00:06:44.817 "vhost_create_scsi_controller", 00:06:44.817 "thread_set_cpumask", 00:06:44.817 "scheduler_set_options", 00:06:44.817 "framework_get_governor", 00:06:44.817 "framework_get_scheduler", 00:06:44.817 
"framework_set_scheduler", 00:06:44.817 "framework_get_reactors", 00:06:44.817 "thread_get_io_channels", 00:06:44.817 "thread_get_pollers", 00:06:44.817 "thread_get_stats", 00:06:44.817 "framework_monitor_context_switch", 00:06:44.817 "spdk_kill_instance", 00:06:44.817 "log_enable_timestamps", 00:06:44.817 "log_get_flags", 00:06:44.817 "log_clear_flag", 00:06:44.817 "log_set_flag", 00:06:44.817 "log_get_level", 00:06:44.817 "log_set_level", 00:06:44.817 "log_get_print_level", 00:06:44.817 "log_set_print_level", 00:06:44.817 "framework_enable_cpumask_locks", 00:06:44.817 "framework_disable_cpumask_locks", 00:06:44.817 "framework_wait_init", 00:06:44.817 "framework_start_init", 00:06:44.817 "scsi_get_devices", 00:06:44.817 "bdev_get_histogram", 00:06:44.817 "bdev_enable_histogram", 00:06:44.817 "bdev_set_qos_limit", 00:06:44.817 "bdev_set_qd_sampling_period", 00:06:44.817 "bdev_get_bdevs", 00:06:44.817 "bdev_reset_iostat", 00:06:44.817 "bdev_get_iostat", 00:06:44.817 "bdev_examine", 00:06:44.817 "bdev_wait_for_examine", 00:06:44.817 "bdev_set_options", 00:06:44.817 "accel_get_stats", 00:06:44.817 "accel_set_options", 00:06:44.817 "accel_set_driver", 00:06:44.817 "accel_crypto_key_destroy", 00:06:44.817 "accel_crypto_keys_get", 00:06:44.817 "accel_crypto_key_create", 00:06:44.817 "accel_assign_opc", 00:06:44.817 "accel_get_module_info", 00:06:44.817 "accel_get_opc_assignments", 00:06:44.817 "vmd_rescan", 00:06:44.817 "vmd_remove_device", 00:06:44.817 "vmd_enable", 00:06:44.817 "sock_get_default_impl", 00:06:44.817 "sock_set_default_impl", 00:06:44.817 "sock_impl_set_options", 00:06:44.817 "sock_impl_get_options", 00:06:44.817 "iobuf_get_stats", 00:06:44.817 "iobuf_set_options", 00:06:44.817 "keyring_get_keys", 00:06:44.817 "vfu_tgt_set_base_path", 00:06:44.817 "framework_get_pci_devices", 00:06:44.817 "framework_get_config", 00:06:44.817 "framework_get_subsystems", 00:06:44.817 "fsdev_set_opts", 00:06:44.817 "fsdev_get_opts", 00:06:44.817 "trace_get_info", 
00:06:44.817 "trace_get_tpoint_group_mask", 00:06:44.817 "trace_disable_tpoint_group", 00:06:44.817 "trace_enable_tpoint_group", 00:06:44.817 "trace_clear_tpoint_mask", 00:06:44.817 "trace_set_tpoint_mask", 00:06:44.817 "notify_get_notifications", 00:06:44.817 "notify_get_types", 00:06:44.817 "spdk_get_version", 00:06:44.817 "rpc_get_methods" 00:06:44.817 ] 00:06:44.817 20:07:56 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:44.817 20:07:56 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:44.817 20:07:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:44.817 20:07:56 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:44.817 20:07:56 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 104023 00:06:44.817 20:07:56 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 104023 ']' 00:06:44.817 20:07:56 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 104023 00:06:44.817 20:07:56 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:44.817 20:07:56 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.817 20:07:56 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104023 00:06:44.817 20:07:56 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:44.817 20:07:56 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:44.817 20:07:56 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104023' 00:06:44.817 killing process with pid 104023 00:06:44.817 20:07:56 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 104023 00:06:44.817 20:07:56 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 104023 00:06:45.385 00:06:45.385 real 0m1.303s 00:06:45.385 user 0m2.349s 00:06:45.385 sys 0m0.495s 00:06:45.385 20:07:57 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.385 20:07:57 spdkcli_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:06:45.385 ************************************ 00:06:45.385 END TEST spdkcli_tcp 00:06:45.385 ************************************ 00:06:45.385 20:07:57 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:45.385 20:07:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.385 20:07:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.385 20:07:57 -- common/autotest_common.sh@10 -- # set +x 00:06:45.385 ************************************ 00:06:45.385 START TEST dpdk_mem_utility 00:06:45.385 ************************************ 00:06:45.385 20:07:57 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:45.385 * Looking for test storage... 00:06:45.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:45.385 20:07:57 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:45.385 20:07:57 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:45.385 20:07:57 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:45.385 20:07:57 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:45.385 20:07:57 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.385 20:07:57 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.385 20:07:57 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.385 20:07:57 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.385 20:07:57 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.385 20:07:57 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.385 20:07:57 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.385 20:07:57 
dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.385 20:07:57 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.385 20:07:57 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.385 20:07:57 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.385 20:07:57 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:45.385 20:07:57 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:45.385 20:07:57 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.385 20:07:57 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.385 20:07:57 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:45.385 20:07:57 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:45.385 20:07:57 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.385 20:07:57 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:45.386 20:07:57 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.386 20:07:57 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:45.386 20:07:57 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:45.386 20:07:57 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.386 20:07:57 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:45.386 20:07:57 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.386 20:07:57 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.386 20:07:57 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.386 20:07:57 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:45.386 20:07:57 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.386 20:07:57 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:45.386 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.386 --rc genhtml_branch_coverage=1 00:06:45.386 --rc genhtml_function_coverage=1 00:06:45.386 --rc genhtml_legend=1 00:06:45.386 --rc geninfo_all_blocks=1 00:06:45.386 --rc geninfo_unexecuted_blocks=1 00:06:45.386 00:06:45.386 ' 00:06:45.386 20:07:57 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:45.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.386 --rc genhtml_branch_coverage=1 00:06:45.386 --rc genhtml_function_coverage=1 00:06:45.386 --rc genhtml_legend=1 00:06:45.386 --rc geninfo_all_blocks=1 00:06:45.386 --rc geninfo_unexecuted_blocks=1 00:06:45.386 00:06:45.386 ' 00:06:45.386 20:07:57 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:45.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.386 --rc genhtml_branch_coverage=1 00:06:45.386 --rc genhtml_function_coverage=1 00:06:45.386 --rc genhtml_legend=1 00:06:45.386 --rc geninfo_all_blocks=1 00:06:45.386 --rc geninfo_unexecuted_blocks=1 00:06:45.386 00:06:45.386 ' 00:06:45.386 20:07:57 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:45.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.386 --rc genhtml_branch_coverage=1 00:06:45.386 --rc genhtml_function_coverage=1 00:06:45.386 --rc genhtml_legend=1 00:06:45.386 --rc geninfo_all_blocks=1 00:06:45.386 --rc geninfo_unexecuted_blocks=1 00:06:45.386 00:06:45.386 ' 00:06:45.386 20:07:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:45.386 20:07:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=104234 00:06:45.386 20:07:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:45.386 20:07:57 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 104234 00:06:45.386 20:07:57 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 104234 ']' 00:06:45.386 20:07:57 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.386 20:07:57 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.386 20:07:57 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.386 20:07:57 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.386 20:07:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:45.386 [2024-11-18 20:07:57.387658] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:45.386 [2024-11-18 20:07:57.387761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104234 ] 00:06:45.645 [2024-11-18 20:07:57.452740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.645 [2024-11-18 20:07:57.500754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.905 20:07:57 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.905 20:07:57 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:45.905 20:07:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:45.905 20:07:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:45.905 20:07:57 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.905 
20:07:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:45.905 { 00:06:45.905 "filename": "/tmp/spdk_mem_dump.txt" 00:06:45.905 } 00:06:45.905 20:07:57 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.905 20:07:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:45.905 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:45.905 1 heaps totaling size 810.000000 MiB 00:06:45.905 size: 810.000000 MiB heap id: 0 00:06:45.905 end heaps---------- 00:06:45.905 9 mempools totaling size 595.772034 MiB 00:06:45.905 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:45.905 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:45.905 size: 92.545471 MiB name: bdev_io_104234 00:06:45.905 size: 50.003479 MiB name: msgpool_104234 00:06:45.905 size: 36.509338 MiB name: fsdev_io_104234 00:06:45.905 size: 21.763794 MiB name: PDU_Pool 00:06:45.905 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:45.905 size: 4.133484 MiB name: evtpool_104234 00:06:45.905 size: 0.026123 MiB name: Session_Pool 00:06:45.905 end mempools------- 00:06:45.905 6 memzones totaling size 4.142822 MiB 00:06:45.905 size: 1.000366 MiB name: RG_ring_0_104234 00:06:45.905 size: 1.000366 MiB name: RG_ring_1_104234 00:06:45.905 size: 1.000366 MiB name: RG_ring_4_104234 00:06:45.905 size: 1.000366 MiB name: RG_ring_5_104234 00:06:45.905 size: 0.125366 MiB name: RG_ring_2_104234 00:06:45.905 size: 0.015991 MiB name: RG_ring_3_104234 00:06:45.905 end memzones------- 00:06:45.905 20:07:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:45.905 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:45.905 list of free elements. 
size: 10.862488 MiB 00:06:45.905 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:45.905 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:45.905 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:45.905 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:45.905 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:45.905 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:45.905 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:45.905 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:45.905 element at address: 0x20001a600000 with size: 0.582886 MiB 00:06:45.905 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:45.905 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:45.905 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:45.905 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:45.905 element at address: 0x200027a00000 with size: 0.410034 MiB 00:06:45.905 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:45.905 list of standard malloc elements. 
size: 199.218628 MiB 00:06:45.905 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:45.905 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:45.905 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:45.905 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:45.905 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:45.905 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:45.905 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:45.905 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:45.905 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:45.905 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:45.905 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:45.905 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:45.905 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:45.905 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:45.905 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:45.905 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:45.905 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:45.905 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:45.905 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:45.905 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:45.905 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:45.905 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:45.905 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:45.905 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:45.905 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:45.905 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:45.905 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:45.905 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:45.905 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:45.905 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:45.905 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:45.905 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:45.905 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:45.905 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:45.905 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:45.905 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:45.905 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:45.905 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:45.905 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:45.905 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:06:45.905 element at address: 0x200027a69040 with size: 0.000183 MiB 00:06:45.905 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:06:45.905 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:45.905 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:45.905 list of memzone associated elements. 
size: 599.918884 MiB 00:06:45.905 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:45.905 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:45.905 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:45.905 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:45.905 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:45.905 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_104234_0 00:06:45.905 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:45.905 associated memzone info: size: 48.002930 MiB name: MP_msgpool_104234_0 00:06:45.905 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:45.905 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_104234_0 00:06:45.905 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:45.905 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:45.905 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:45.905 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:45.905 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:45.905 associated memzone info: size: 3.000122 MiB name: MP_evtpool_104234_0 00:06:45.905 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:45.905 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_104234 00:06:45.905 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:45.905 associated memzone info: size: 1.007996 MiB name: MP_evtpool_104234 00:06:45.905 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:45.905 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:45.905 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:45.905 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:45.906 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:45.906 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:45.906 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:45.906 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:45.906 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:45.906 associated memzone info: size: 1.000366 MiB name: RG_ring_0_104234 00:06:45.906 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:45.906 associated memzone info: size: 1.000366 MiB name: RG_ring_1_104234 00:06:45.906 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:45.906 associated memzone info: size: 1.000366 MiB name: RG_ring_4_104234 00:06:45.906 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:06:45.906 associated memzone info: size: 1.000366 MiB name: RG_ring_5_104234 00:06:45.906 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:45.906 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_104234 00:06:45.906 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:45.906 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_104234 00:06:45.906 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:45.906 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:45.906 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:45.906 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:45.906 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:45.906 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:45.906 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:45.906 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_104234 00:06:45.906 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:45.906 associated memzone info: size: 0.125366 MiB name: RG_ring_2_104234 00:06:45.906 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:45.906 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:45.906 element at address: 0x200027a69100 with size: 0.023743 MiB 00:06:45.906 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:45.906 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:45.906 associated memzone info: size: 0.015991 MiB name: RG_ring_3_104234 00:06:45.906 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:06:45.906 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:45.906 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:45.906 associated memzone info: size: 0.000183 MiB name: MP_msgpool_104234 00:06:45.906 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:45.906 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_104234 00:06:45.906 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:45.906 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_104234 00:06:45.906 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:06:45.906 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:45.906 20:07:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:45.906 20:07:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 104234 00:06:45.906 20:07:57 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 104234 ']' 00:06:45.906 20:07:57 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 104234 00:06:45.906 20:07:57 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:45.906 20:07:57 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.906 20:07:57 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104234 00:06:45.906 20:07:57 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.906 20:07:57 dpdk_mem_utility -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.906 20:07:57 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104234' 00:06:45.906 killing process with pid 104234 00:06:45.906 20:07:57 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 104234 00:06:45.906 20:07:57 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 104234 00:06:46.473 00:06:46.473 real 0m1.091s 00:06:46.473 user 0m1.081s 00:06:46.473 sys 0m0.406s 00:06:46.473 20:07:58 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.473 20:07:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:46.473 ************************************ 00:06:46.474 END TEST dpdk_mem_utility 00:06:46.474 ************************************ 00:06:46.474 20:07:58 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:46.474 20:07:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.474 20:07:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.474 20:07:58 -- common/autotest_common.sh@10 -- # set +x 00:06:46.474 ************************************ 00:06:46.474 START TEST event 00:06:46.474 ************************************ 00:06:46.474 20:07:58 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:46.474 * Looking for test storage... 
00:06:46.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:46.474 20:07:58 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:46.474 20:07:58 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:46.474 20:07:58 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:46.474 20:07:58 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:46.474 20:07:58 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.474 20:07:58 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.474 20:07:58 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.474 20:07:58 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.474 20:07:58 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.474 20:07:58 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.474 20:07:58 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.474 20:07:58 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.474 20:07:58 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.474 20:07:58 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.474 20:07:58 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.474 20:07:58 event -- scripts/common.sh@344 -- # case "$op" in 00:06:46.474 20:07:58 event -- scripts/common.sh@345 -- # : 1 00:06:46.474 20:07:58 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.474 20:07:58 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.474 20:07:58 event -- scripts/common.sh@365 -- # decimal 1 00:06:46.474 20:07:58 event -- scripts/common.sh@353 -- # local d=1 00:06:46.474 20:07:58 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.474 20:07:58 event -- scripts/common.sh@355 -- # echo 1 00:06:46.474 20:07:58 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.474 20:07:58 event -- scripts/common.sh@366 -- # decimal 2 00:06:46.474 20:07:58 event -- scripts/common.sh@353 -- # local d=2 00:06:46.474 20:07:58 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.474 20:07:58 event -- scripts/common.sh@355 -- # echo 2 00:06:46.474 20:07:58 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.474 20:07:58 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.474 20:07:58 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.474 20:07:58 event -- scripts/common.sh@368 -- # return 0 00:06:46.474 20:07:58 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.474 20:07:58 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:46.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.474 --rc genhtml_branch_coverage=1 00:06:46.474 --rc genhtml_function_coverage=1 00:06:46.474 --rc genhtml_legend=1 00:06:46.474 --rc geninfo_all_blocks=1 00:06:46.474 --rc geninfo_unexecuted_blocks=1 00:06:46.474 00:06:46.474 ' 00:06:46.474 20:07:58 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:46.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.474 --rc genhtml_branch_coverage=1 00:06:46.474 --rc genhtml_function_coverage=1 00:06:46.474 --rc genhtml_legend=1 00:06:46.474 --rc geninfo_all_blocks=1 00:06:46.474 --rc geninfo_unexecuted_blocks=1 00:06:46.474 00:06:46.474 ' 00:06:46.733 20:07:58 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:46.733 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:46.733 --rc genhtml_branch_coverage=1 00:06:46.733 --rc genhtml_function_coverage=1 00:06:46.733 --rc genhtml_legend=1 00:06:46.733 --rc geninfo_all_blocks=1 00:06:46.733 --rc geninfo_unexecuted_blocks=1 00:06:46.733 00:06:46.733 ' 00:06:46.733 20:07:58 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:46.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.733 --rc genhtml_branch_coverage=1 00:06:46.733 --rc genhtml_function_coverage=1 00:06:46.733 --rc genhtml_legend=1 00:06:46.733 --rc geninfo_all_blocks=1 00:06:46.733 --rc geninfo_unexecuted_blocks=1 00:06:46.733 00:06:46.733 ' 00:06:46.733 20:07:58 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:46.733 20:07:58 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:46.733 20:07:58 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:46.733 20:07:58 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:46.733 20:07:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.733 20:07:58 event -- common/autotest_common.sh@10 -- # set +x 00:06:46.733 ************************************ 00:06:46.733 START TEST event_perf 00:06:46.733 ************************************ 00:06:46.733 20:07:58 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:46.733 Running I/O for 1 seconds...[2024-11-18 20:07:58.521714] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:46.733 [2024-11-18 20:07:58.521780] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104430 ] 00:06:46.733 [2024-11-18 20:07:58.587233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:46.733 [2024-11-18 20:07:58.635306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.733 [2024-11-18 20:07:58.635404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.733 [2024-11-18 20:07:58.635497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.733 [2024-11-18 20:07:58.635505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.695 Running I/O for 1 seconds... 00:06:47.695 lcore 0: 230582 00:06:47.695 lcore 1: 230583 00:06:47.695 lcore 2: 230583 00:06:47.695 lcore 3: 230581 00:06:47.695 done. 
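The event_perf run above pins four reactors (core mask 0xF) and reports a per-lcore event count for the one-second window. As an aside, those counts can be totaled with a trivial shell loop (a standalone sketch, not part of the test suite; the numbers are copied from the "lcore N: COUNT" lines above):

```shell
#!/bin/sh
# Per-lcore counts as printed by event_perf above ("lcore N: COUNT").
total=0
for n in 230582 230583 230583 230581; do
    total=$((total + n))
done
# Roughly 922k events across 4 cores in the 1-second window.
echo "total events: $total"   # -> total events: 922329
```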
00:06:47.695 00:06:47.695 real 0m1.170s 00:06:47.695 user 0m4.099s 00:06:47.695 sys 0m0.066s 00:06:47.695 20:07:59 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.695 20:07:59 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:47.695 ************************************ 00:06:47.695 END TEST event_perf 00:06:47.696 ************************************ 00:06:47.696 20:07:59 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:47.696 20:07:59 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:47.696 20:07:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.696 20:07:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:47.954 ************************************ 00:06:47.954 START TEST event_reactor 00:06:47.954 ************************************ 00:06:47.954 20:07:59 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:47.954 [2024-11-18 20:07:59.732129] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:47.954 [2024-11-18 20:07:59.732195] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104591 ] 00:06:47.954 [2024-11-18 20:07:59.797723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.954 [2024-11-18 20:07:59.841138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.892 test_start 00:06:48.892 oneshot 00:06:48.892 tick 100 00:06:48.892 tick 100 00:06:48.892 tick 250 00:06:48.892 tick 100 00:06:48.892 tick 100 00:06:48.892 tick 100 00:06:48.892 tick 250 00:06:48.892 tick 500 00:06:48.892 tick 100 00:06:48.892 tick 100 00:06:48.892 tick 250 00:06:48.892 tick 100 00:06:48.892 tick 100 00:06:48.892 test_end 00:06:48.892 00:06:48.892 real 0m1.162s 00:06:48.892 user 0m1.097s 00:06:48.892 sys 0m0.060s 00:06:48.892 20:08:00 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.892 20:08:00 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:48.892 ************************************ 00:06:48.892 END TEST event_reactor 00:06:48.892 ************************************ 00:06:49.151 20:08:00 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:49.151 20:08:00 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:49.151 20:08:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.151 20:08:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:49.151 ************************************ 00:06:49.151 START TEST event_reactor_perf 00:06:49.151 ************************************ 00:06:49.151 20:08:00 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:06:49.151 [2024-11-18 20:08:00.945559] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:49.151 [2024-11-18 20:08:00.945627] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104750 ] 00:06:49.151 [2024-11-18 20:08:01.010287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.152 [2024-11-18 20:08:01.053456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.528 test_start 00:06:50.528 test_end 00:06:50.528 Performance: 448317 events per second 00:06:50.528 00:06:50.528 real 0m1.166s 00:06:50.528 user 0m1.099s 00:06:50.528 sys 0m0.062s 00:06:50.528 20:08:02 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.528 20:08:02 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:50.528 ************************************ 00:06:50.528 END TEST event_reactor_perf 00:06:50.528 ************************************ 00:06:50.528 20:08:02 event -- event/event.sh@49 -- # uname -s 00:06:50.528 20:08:02 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:50.528 20:08:02 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:50.528 20:08:02 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.528 20:08:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.528 20:08:02 event -- common/autotest_common.sh@10 -- # set +x 00:06:50.528 ************************************ 00:06:50.528 START TEST event_scheduler 00:06:50.528 ************************************ 00:06:50.528 20:08:02 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:50.528 * Looking for test storage... 00:06:50.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:50.528 20:08:02 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:50.528 20:08:02 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:50.528 20:08:02 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:50.528 20:08:02 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:50.528 20:08:02 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.528 20:08:02 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.528 20:08:02 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.528 20:08:02 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.528 20:08:02 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.528 20:08:02 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.528 20:08:02 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.528 20:08:02 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.528 20:08:02 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.528 20:08:02 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.528 20:08:02 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.528 20:08:02 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:50.528 20:08:02 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:50.528 20:08:02 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.528 20:08:02 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.528 20:08:02 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:50.528 20:08:02 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:50.528 20:08:02 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.528 20:08:02 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:50.528 20:08:02 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.528 20:08:02 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:50.528 20:08:02 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:50.528 20:08:02 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.528 20:08:02 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:50.528 20:08:02 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.528 20:08:02 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.528 20:08:02 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.528 20:08:02 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:50.528 20:08:02 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.528 20:08:02 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:50.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.528 --rc genhtml_branch_coverage=1 00:06:50.528 --rc genhtml_function_coverage=1 00:06:50.528 --rc genhtml_legend=1 00:06:50.528 --rc geninfo_all_blocks=1 00:06:50.528 --rc geninfo_unexecuted_blocks=1 00:06:50.528 00:06:50.528 ' 00:06:50.528 20:08:02 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:50.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.528 --rc genhtml_branch_coverage=1 00:06:50.528 --rc genhtml_function_coverage=1 00:06:50.528 --rc 
genhtml_legend=1 00:06:50.528 --rc geninfo_all_blocks=1 00:06:50.528 --rc geninfo_unexecuted_blocks=1 00:06:50.528 00:06:50.528 ' 00:06:50.528 20:08:02 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:50.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.528 --rc genhtml_branch_coverage=1 00:06:50.528 --rc genhtml_function_coverage=1 00:06:50.528 --rc genhtml_legend=1 00:06:50.528 --rc geninfo_all_blocks=1 00:06:50.528 --rc geninfo_unexecuted_blocks=1 00:06:50.528 00:06:50.528 ' 00:06:50.528 20:08:02 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:50.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.528 --rc genhtml_branch_coverage=1 00:06:50.528 --rc genhtml_function_coverage=1 00:06:50.528 --rc genhtml_legend=1 00:06:50.528 --rc geninfo_all_blocks=1 00:06:50.528 --rc geninfo_unexecuted_blocks=1 00:06:50.528 00:06:50.528 ' 00:06:50.528 20:08:02 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:50.528 20:08:02 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=105053 00:06:50.528 20:08:02 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:50.528 20:08:02 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:50.528 20:08:02 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 105053 00:06:50.528 20:08:02 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 105053 ']' 00:06:50.528 20:08:02 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.528 20:08:02 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.528 20:08:02 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.528 20:08:02 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.528 20:08:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:50.528 [2024-11-18 20:08:02.345121] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:50.528 [2024-11-18 20:08:02.345220] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105053 ] 00:06:50.528 [2024-11-18 20:08:02.417344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:50.528 [2024-11-18 20:08:02.466306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.528 [2024-11-18 20:08:02.466415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.528 [2024-11-18 20:08:02.466503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.528 [2024-11-18 20:08:02.466506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.788 20:08:02 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.788 20:08:02 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:50.788 20:08:02 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:50.788 20:08:02 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.788 20:08:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:50.788 [2024-11-18 20:08:02.575401] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:50.788 [2024-11-18 20:08:02.575429] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:50.788 [2024-11-18 20:08:02.575446] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:50.788 [2024-11-18 20:08:02.575456] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:50.788 [2024-11-18 20:08:02.575466] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:50.788 20:08:02 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.788 20:08:02 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:50.788 20:08:02 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.788 20:08:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:50.788 [2024-11-18 20:08:02.671648] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:50.788 20:08:02 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.788 20:08:02 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:50.788 20:08:02 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.788 20:08:02 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.788 20:08:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:50.788 ************************************ 00:06:50.788 START TEST scheduler_create_thread 00:06:50.788 ************************************ 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.788 2 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.788 3 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.788 4 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.788 5 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.788 20:08:02 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.788 6 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.788 7 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.788 8 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.788 20:08:02 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.788 9 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.788 10 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.788 20:08:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:50.789 20:08:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:50.789 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.789 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.789 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.789 20:08:02 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:50.789 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.789 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.046 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.046 20:08:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:51.046 20:08:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:51.046 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.046 20:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.305 20:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.305 00:06:51.305 real 0m0.590s 00:06:51.305 user 0m0.011s 00:06:51.305 sys 0m0.005s 00:06:51.305 20:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.305 20:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.305 ************************************ 00:06:51.305 END TEST scheduler_create_thread 00:06:51.305 ************************************ 00:06:51.563 20:08:03 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:51.564 20:08:03 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 105053 00:06:51.564 20:08:03 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 105053 ']' 00:06:51.564 20:08:03 event.event_scheduler -- common/autotest_common.sh@958 -- # kill 
-0 105053 00:06:51.564 20:08:03 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:51.564 20:08:03 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.564 20:08:03 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105053 00:06:51.564 20:08:03 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:51.564 20:08:03 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:51.564 20:08:03 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105053' 00:06:51.564 killing process with pid 105053 00:06:51.564 20:08:03 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 105053 00:06:51.564 20:08:03 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 105053 00:06:51.821 [2024-11-18 20:08:03.771863] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:52.081 00:06:52.081 real 0m1.812s 00:06:52.081 user 0m2.468s 00:06:52.081 sys 0m0.357s 00:06:52.081 20:08:03 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.081 20:08:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:52.081 ************************************ 00:06:52.081 END TEST event_scheduler 00:06:52.081 ************************************ 00:06:52.081 20:08:03 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:52.081 20:08:03 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:52.081 20:08:03 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.081 20:08:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.081 20:08:03 event -- common/autotest_common.sh@10 -- # set +x 00:06:52.081 ************************************ 00:06:52.081 START TEST app_repeat 00:06:52.081 ************************************ 00:06:52.081 20:08:04 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:52.081 20:08:04 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.081 20:08:04 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.081 20:08:04 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:52.081 20:08:04 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:52.081 20:08:04 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:52.081 20:08:04 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:52.081 20:08:04 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:52.081 20:08:04 event.app_repeat -- event/event.sh@19 -- # repeat_pid=105247 00:06:52.081 20:08:04 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:52.081 20:08:04 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:52.081 20:08:04 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 105247' 00:06:52.081 Process app_repeat pid: 105247 00:06:52.081 20:08:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:52.081 20:08:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:52.081 spdk_app_start Round 0 00:06:52.081 20:08:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 105247 /var/tmp/spdk-nbd.sock 00:06:52.081 20:08:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 105247 ']' 00:06:52.081 20:08:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:52.081 20:08:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.081 20:08:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:52.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:52.081 20:08:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.081 20:08:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:52.081 [2024-11-18 20:08:04.042005] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:52.081 [2024-11-18 20:08:04.042065] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105247 ] 00:06:52.340 [2024-11-18 20:08:04.109120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:52.340 [2024-11-18 20:08:04.156969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.340 [2024-11-18 20:08:04.156972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.340 20:08:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.340 20:08:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:52.341 20:08:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:52.599 Malloc0 00:06:52.599 20:08:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:52.858 Malloc1 00:06:53.117 20:08:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:53.117 20:08:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.117 20:08:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:53.117 20:08:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:53.117 20:08:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.117 20:08:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:53.117 20:08:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:53.117 
20:08:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.117 20:08:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:53.117 20:08:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:53.117 20:08:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.117 20:08:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:53.117 20:08:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:53.117 20:08:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:53.117 20:08:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:53.117 20:08:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:53.376 /dev/nbd0 00:06:53.376 20:08:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:53.376 20:08:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:53.376 20:08:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:53.376 20:08:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:53.376 20:08:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:53.376 20:08:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:53.376 20:08:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:53.376 20:08:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:53.376 20:08:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:53.376 20:08:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:53.376 20:08:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:53.376 1+0 records in 00:06:53.376 1+0 records out 00:06:53.376 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227002 s, 18.0 MB/s 00:06:53.376 20:08:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:53.376 20:08:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:53.376 20:08:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:53.376 20:08:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:53.376 20:08:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:53.376 20:08:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:53.376 20:08:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:53.376 20:08:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:53.635 /dev/nbd1 00:06:53.635 20:08:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:53.635 20:08:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:53.635 20:08:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:53.635 20:08:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:53.635 20:08:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:53.635 20:08:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:53.635 20:08:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:53.635 20:08:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:53.635 20:08:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:53.635 20:08:05 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:53.635 20:08:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:53.635 1+0 records in 00:06:53.635 1+0 records out 00:06:53.635 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278904 s, 14.7 MB/s 00:06:53.635 20:08:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:53.635 20:08:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:53.635 20:08:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:53.635 20:08:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:53.635 20:08:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:53.635 20:08:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:53.635 20:08:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:53.635 20:08:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.635 20:08:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.635 20:08:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.894 20:08:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:53.894 { 00:06:53.894 "nbd_device": "/dev/nbd0", 00:06:53.894 "bdev_name": "Malloc0" 00:06:53.894 }, 00:06:53.894 { 00:06:53.894 "nbd_device": "/dev/nbd1", 00:06:53.894 "bdev_name": "Malloc1" 00:06:53.894 } 00:06:53.894 ]' 00:06:53.894 20:08:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:53.894 { 00:06:53.894 "nbd_device": "/dev/nbd0", 00:06:53.894 "bdev_name": "Malloc0" 00:06:53.894 
}, 00:06:53.894 { 00:06:53.894 "nbd_device": "/dev/nbd1", 00:06:53.894 "bdev_name": "Malloc1" 00:06:53.894 } 00:06:53.894 ]' 00:06:53.894 20:08:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.894 20:08:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:53.894 /dev/nbd1' 00:06:53.894 20:08:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:53.894 /dev/nbd1' 00:06:53.894 20:08:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.894 20:08:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:53.894 20:08:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:53.894 20:08:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:53.894 20:08:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:53.894 20:08:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:53.894 20:08:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.894 20:08:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:53.894 20:08:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:53.894 20:08:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:53.894 20:08:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:53.894 20:08:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:53.894 256+0 records in 00:06:53.894 256+0 records out 00:06:53.894 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00496511 s, 211 MB/s 00:06:53.894 20:08:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.894 20:08:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:53.894 256+0 records in 00:06:53.894 256+0 records out 00:06:53.894 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201243 s, 52.1 MB/s 00:06:53.894 20:08:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.894 20:08:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:54.152 256+0 records in 00:06:54.152 256+0 records out 00:06:54.152 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230798 s, 45.4 MB/s 00:06:54.152 20:08:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:54.152 20:08:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.152 20:08:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:54.153 20:08:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:54.153 20:08:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:54.153 20:08:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:54.153 20:08:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:54.153 20:08:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.153 20:08:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:54.153 20:08:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.153 20:08:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:54.153 20:08:05 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:54.153 20:08:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:54.153 20:08:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.153 20:08:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.153 20:08:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:54.153 20:08:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:54.153 20:08:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.153 20:08:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:54.411 20:08:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:54.411 20:08:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:54.411 20:08:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:54.411 20:08:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.411 20:08:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.411 20:08:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:54.411 20:08:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:54.411 20:08:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.411 20:08:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.411 20:08:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:54.669 20:08:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:54.669 20:08:06 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:54.669 20:08:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:54.669 20:08:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.669 20:08:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.669 20:08:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:54.669 20:08:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:54.669 20:08:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.669 20:08:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:54.669 20:08:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.669 20:08:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:54.927 20:08:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:54.927 20:08:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:54.927 20:08:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:54.927 20:08:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:54.927 20:08:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:54.927 20:08:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:54.927 20:08:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:54.927 20:08:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:54.927 20:08:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:54.927 20:08:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:54.927 20:08:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:54.927 20:08:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:54.927 20:08:06 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:55.186 20:08:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:55.445 [2024-11-18 20:08:07.309986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:55.445 [2024-11-18 20:08:07.353306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.445 [2024-11-18 20:08:07.353306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.445 [2024-11-18 20:08:07.404519] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:55.445 [2024-11-18 20:08:07.404578] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:58.729 20:08:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:58.729 20:08:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:58.729 spdk_app_start Round 1 00:06:58.729 20:08:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 105247 /var/tmp/spdk-nbd.sock 00:06:58.729 20:08:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 105247 ']' 00:06:58.729 20:08:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:58.729 20:08:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.729 20:08:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:58.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:58.729 20:08:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:58.729 20:08:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:58.729 20:08:10 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:58.729 20:08:10 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:58.729 20:08:10 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:58.729 Malloc0
00:06:58.729 20:08:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:58.988 Malloc1
00:06:58.988 20:08:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:58.988 20:08:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:58.988 20:08:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:58.988 20:08:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:58.988 20:08:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:58.988 20:08:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:58.988 20:08:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:58.988 20:08:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:58.988 20:08:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:58.988 20:08:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:58.988 20:08:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:58.988 20:08:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:58.988 20:08:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:58.988 20:08:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:58.988 20:08:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:58.988 20:08:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:59.554 /dev/nbd0
00:06:59.554 20:08:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:59.554 20:08:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:59.554 20:08:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:59.554 20:08:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:59.554 20:08:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:59.554 20:08:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:59.554 20:08:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:59.554 20:08:11 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:59.554 20:08:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:59.554 20:08:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:59.554 20:08:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:59.554 1+0 records in
00:06:59.554 1+0 records out
00:06:59.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200489 s, 20.4 MB/s
00:06:59.555 20:08:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:59.555 20:08:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:59.555 20:08:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:59.555 20:08:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:59.555 20:08:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:59.555 20:08:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:59.555 20:08:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:59.555 20:08:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:59.813 /dev/nbd1
00:06:59.813 20:08:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:59.813 20:08:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:59.813 20:08:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:59.813 20:08:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:59.813 20:08:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:59.813 20:08:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:59.813 20:08:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:59.813 20:08:11 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:59.813 20:08:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:59.813 20:08:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:59.813 20:08:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:59.813 1+0 records in
00:06:59.813 1+0 records out
00:06:59.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000151881 s, 27.0 MB/s
00:06:59.813 20:08:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:59.813 20:08:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:59.813 20:08:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:59.813 20:08:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:59.813 20:08:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:59.813 20:08:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:59.813 20:08:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:59.813 20:08:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:59.813 20:08:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:59.813 20:08:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:00.072 {
00:07:00.072 "nbd_device": "/dev/nbd0",
00:07:00.072 "bdev_name": "Malloc0"
00:07:00.072 },
00:07:00.072 {
00:07:00.072 "nbd_device": "/dev/nbd1",
00:07:00.072 "bdev_name": "Malloc1"
00:07:00.072 }
00:07:00.072 ]'
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:00.072 {
00:07:00.072 "nbd_device": "/dev/nbd0",
00:07:00.072 "bdev_name": "Malloc0"
00:07:00.072 },
00:07:00.072 {
00:07:00.072 "nbd_device": "/dev/nbd1",
00:07:00.072 "bdev_name": "Malloc1"
00:07:00.072 }
00:07:00.072 ]'
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:00.072 /dev/nbd1'
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:00.072 /dev/nbd1'
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:00.072 256+0 records in
00:07:00.072 256+0 records out
00:07:00.072 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00513727 s, 204 MB/s
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:00.072 256+0 records in
00:07:00.072 256+0 records out
00:07:00.072 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204111 s, 51.4 MB/s
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:00.072 256+0 records in
00:07:00.072 256+0 records out
00:07:00.072 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222189 s, 47.2 MB/s
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:00.072 20:08:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:00.330 20:08:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:00.330 20:08:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:00.330 20:08:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:00.330 20:08:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:00.330 20:08:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:00.331 20:08:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:00.331 20:08:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:00.331 20:08:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:00.331 20:08:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:00.331 20:08:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:00.589 20:08:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:00.589 20:08:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:00.589 20:08:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:00.589 20:08:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:00.589 20:08:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:00.589 20:08:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:00.589 20:08:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:00.589 20:08:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:00.589 20:08:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:00.589 20:08:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:00.589 20:08:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:01.156 20:08:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:01.156 20:08:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:01.156 20:08:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:01.156 20:08:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:01.156 20:08:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:01.156 20:08:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:01.156 20:08:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:07:01.156 20:08:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:07:01.156 20:08:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:01.156 20:08:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:07:01.156 20:08:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:01.156 20:08:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:07:01.156 20:08:12 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:01.415 20:08:13 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:07:01.415 [2024-11-18 20:08:13.382856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:01.674 [2024-11-18 20:08:13.428345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:01.674 [2024-11-18 20:08:13.428345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:01.674 [2024-11-18 20:08:13.482822] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:01.674 [2024-11-18 20:08:13.482887] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:07:04.205 20:08:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:07:04.205 20:08:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:07:04.205 spdk_app_start Round 2
00:07:04.205 20:08:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 105247 /var/tmp/spdk-nbd.sock
00:07:04.205 20:08:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 105247 ']'
00:07:04.205 20:08:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:04.205 20:08:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:04.205 20:08:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:04.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:04.205 20:08:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:04.205 20:08:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:04.463 20:08:16 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:04.463 20:08:16 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:07:04.463 20:08:16 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:04.722 Malloc0
00:07:04.980 20:08:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:05.238 Malloc1
00:07:05.239 20:08:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:05.239 20:08:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:05.239 20:08:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:05.239 20:08:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:05.239 20:08:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:05.239 20:08:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:05.239 20:08:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:05.239 20:08:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:05.239 20:08:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:05.239 20:08:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:05.239 20:08:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:05.239 20:08:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:05.239 20:08:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:07:05.239 20:08:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:05.239 20:08:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:05.239 20:08:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:05.497 /dev/nbd0
00:07:05.497 20:08:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:05.497 20:08:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:05.497 20:08:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:07:05.497 20:08:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:07:05.497 20:08:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:05.497 20:08:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:05.497 20:08:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:07:05.497 20:08:17 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:07:05.497 20:08:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:05.497 20:08:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:05.497 20:08:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:05.497 1+0 records in
00:07:05.497 1+0 records out
00:07:05.497 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196247 s, 20.9 MB/s
00:07:05.497 20:08:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:05.497 20:08:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:07:05.497 20:08:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:05.497 20:08:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:05.497 20:08:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:07:05.497 20:08:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:05.497 20:08:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:05.497 20:08:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:05.756 /dev/nbd1
00:07:05.756 20:08:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:05.756 20:08:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:05.756 20:08:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:07:05.756 20:08:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:07:05.756 20:08:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:05.756 20:08:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:05.756 20:08:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:07:05.756 20:08:17 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:07:05.756 20:08:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:05.756 20:08:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:05.756 20:08:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:05.756 1+0 records in
00:07:05.756 1+0 records out
00:07:05.756 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211241 s, 19.4 MB/s
00:07:05.756 20:08:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:05.756 20:08:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:07:05.756 20:08:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:05.756 20:08:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:05.756 20:08:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:07:05.756 20:08:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:05.756 20:08:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:05.756 20:08:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:05.756 20:08:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:05.756 20:08:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:06.015 20:08:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:06.015 {
00:07:06.015 "nbd_device": "/dev/nbd0",
00:07:06.015 "bdev_name": "Malloc0"
00:07:06.015 },
00:07:06.015 {
00:07:06.015 "nbd_device": "/dev/nbd1",
00:07:06.015 "bdev_name": "Malloc1"
00:07:06.015 }
00:07:06.015 ]'
00:07:06.015 20:08:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:06.015 {
00:07:06.015 "nbd_device": "/dev/nbd0",
00:07:06.015 "bdev_name": "Malloc0"
00:07:06.015 },
00:07:06.015 {
00:07:06.015 "nbd_device": "/dev/nbd1",
00:07:06.015 "bdev_name": "Malloc1"
00:07:06.015 }
00:07:06.015 ]'
00:07:06.015 20:08:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:06.274 /dev/nbd1'
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:06.274 /dev/nbd1'
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:06.274 256+0 records in
00:07:06.274 256+0 records out
00:07:06.274 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00505815 s, 207 MB/s
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:06.274 256+0 records in
00:07:06.274 256+0 records out
00:07:06.274 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204926 s, 51.2 MB/s
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:06.274 256+0 records in
00:07:06.274 256+0 records out
00:07:06.274 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226511 s, 46.3 MB/s
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:06.274 20:08:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:06.533 20:08:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:06.533 20:08:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:06.533 20:08:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:06.533 20:08:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:06.533 20:08:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:06.533 20:08:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:06.533 20:08:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:06.533 20:08:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:06.533 20:08:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:06.533 20:08:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:06.791 20:08:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:06.791 20:08:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:06.791 20:08:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:06.791 20:08:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:06.791 20:08:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:06.791 20:08:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:06.791 20:08:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:06.791 20:08:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:06.791 20:08:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:06.791 20:08:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:06.791 20:08:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:07.049 20:08:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:07.049 20:08:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:07.049 20:08:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:07.049 20:08:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:07.049 20:08:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:07.049 20:08:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:07.049 20:08:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:07:07.049 20:08:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:07:07.049 20:08:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:07.049 20:08:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:07:07.049 20:08:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:07.049 20:08:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:07:07.049 20:08:18 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:07.307 20:08:19 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:07:07.566 [2024-11-18 20:08:19.476680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:07.566 [2024-11-18 20:08:19.525111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:07.566 [2024-11-18 20:08:19.525116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:07.825 [2024-11-18 20:08:19.584053] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:07.825 [2024-11-18 20:08:19.584125] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:07:10.356 20:08:22 event.app_repeat -- event/event.sh@38 -- # waitforlisten 105247 /var/tmp/spdk-nbd.sock
00:07:10.356 20:08:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 105247 ']'
00:07:10.356 20:08:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:10.356 20:08:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:10.356 20:08:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:10.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:10.356 20:08:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:10.356 20:08:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:10.615 20:08:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:10.615 20:08:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:07:10.615 20:08:22 event.app_repeat -- event/event.sh@39 -- # killprocess 105247
00:07:10.615 20:08:22 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 105247 ']'
00:07:10.615 20:08:22 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 105247
00:07:10.615 20:08:22 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:07:10.615 20:08:22 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:10.615 20:08:22 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105247
00:07:10.874 20:08:22 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:10.874 20:08:22 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:10.874 20:08:22 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105247'
00:07:10.874 killing process with pid 105247
00:07:10.874 20:08:22 event.app_repeat -- common/autotest_common.sh@973 -- # kill 105247
00:07:10.874 20:08:22 event.app_repeat -- common/autotest_common.sh@978 -- # wait 105247
00:07:10.874 spdk_app_start is called in Round 0.
00:07:10.874 Shutdown signal received, stop current app iteration
00:07:10.874 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 reinitialization...
00:07:10.874 spdk_app_start is called in Round 1.
00:07:10.874 Shutdown signal received, stop current app iteration
00:07:10.874 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 reinitialization...
00:07:10.874 spdk_app_start is called in Round 2.
00:07:10.874 Shutdown signal received, stop current app iteration
00:07:10.874 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 reinitialization...
00:07:10.874 spdk_app_start is called in Round 3.
00:07:10.874 Shutdown signal received, stop current app iteration
00:07:10.874 20:08:22 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:07:10.874 20:08:22 event.app_repeat -- event/event.sh@42 -- # return 0
00:07:10.874
00:07:10.874 real 0m18.775s
00:07:10.874 user 0m41.600s
00:07:10.874 sys 0m3.240s
00:07:10.874 20:08:22 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:10.874 20:08:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:10.874 ************************************
00:07:10.874 END TEST app_repeat
00:07:10.874 ************************************
00:07:10.874 20:08:22 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:07:10.874 20:08:22 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:07:10.874 20:08:22 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:10.874 20:08:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:10.874 20:08:22 event -- common/autotest_common.sh@10 -- # set +x
00:07:10.874 ************************************
00:07:10.874 START TEST cpu_locks
00:07:10.874 ************************************
00:07:10.874 20:08:22 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:07:10.874 * Looking for test storage...
00:07:11.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:07:11.132 20:08:22 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:11.133 20:08:22 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version
00:07:11.133 20:08:22 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:11.133 20:08:22 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:11.133 20:08:22 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:11.133 20:08:22 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:11.133 20:08:22 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:11.133 20:08:22 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:07:11.133 20:08:22 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:07:11.133 20:08:22 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:07:11.133 20:08:22 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:07:11.133 20:08:22 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:07:11.133 20:08:22 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:07:11.133 20:08:22 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:07:11.133 20:08:22 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:11.133 20:08:22 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:07:11.133 20:08:22 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:07:11.133 20:08:22 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:11.133 20:08:22 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:07:11.133 20:08:22 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:11.133 20:08:22 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:11.133 20:08:22 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.133 20:08:22 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:11.133 20:08:22 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.133 20:08:22 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:11.133 20:08:22 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:11.133 20:08:22 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.133 20:08:22 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:11.133 20:08:22 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.133 20:08:22 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.133 20:08:22 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.133 20:08:22 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:11.133 20:08:22 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.133 20:08:22 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:11.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.133 --rc genhtml_branch_coverage=1 00:07:11.133 --rc genhtml_function_coverage=1 00:07:11.133 --rc genhtml_legend=1 00:07:11.133 --rc geninfo_all_blocks=1 00:07:11.133 --rc geninfo_unexecuted_blocks=1 00:07:11.133 00:07:11.133 ' 00:07:11.133 20:08:22 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:11.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.133 --rc genhtml_branch_coverage=1 00:07:11.133 --rc genhtml_function_coverage=1 00:07:11.133 --rc genhtml_legend=1 00:07:11.133 --rc geninfo_all_blocks=1 00:07:11.133 --rc geninfo_unexecuted_blocks=1 
00:07:11.133 00:07:11.133 ' 00:07:11.133 20:08:22 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:11.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.133 --rc genhtml_branch_coverage=1 00:07:11.133 --rc genhtml_function_coverage=1 00:07:11.133 --rc genhtml_legend=1 00:07:11.133 --rc geninfo_all_blocks=1 00:07:11.133 --rc geninfo_unexecuted_blocks=1 00:07:11.133 00:07:11.133 ' 00:07:11.133 20:08:22 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:11.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.133 --rc genhtml_branch_coverage=1 00:07:11.133 --rc genhtml_function_coverage=1 00:07:11.133 --rc genhtml_legend=1 00:07:11.133 --rc geninfo_all_blocks=1 00:07:11.133 --rc geninfo_unexecuted_blocks=1 00:07:11.133 00:07:11.133 ' 00:07:11.133 20:08:22 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:11.133 20:08:22 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:11.133 20:08:22 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:11.133 20:08:22 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:11.133 20:08:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.133 20:08:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.133 20:08:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.133 ************************************ 00:07:11.133 START TEST default_locks 00:07:11.133 ************************************ 00:07:11.133 20:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:11.133 20:08:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=107736 00:07:11.133 20:08:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:07:11.133 20:08:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 107736 00:07:11.133 20:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 107736 ']' 00:07:11.133 20:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.133 20:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.133 20:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.133 20:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.133 20:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.133 [2024-11-18 20:08:23.055525] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:07:11.133 [2024-11-18 20:08:23.055606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107736 ] 00:07:11.133 [2024-11-18 20:08:23.119945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.392 [2024-11-18 20:08:23.165871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.657 20:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.657 20:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:11.657 20:08:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 107736 00:07:11.657 20:08:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 107736 00:07:11.657 20:08:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:11.657 lslocks: write error 00:07:11.657 20:08:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 107736 00:07:11.657 20:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 107736 ']' 00:07:11.657 20:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 107736 00:07:11.657 20:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:11.657 20:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.657 20:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107736 00:07:11.657 20:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.657 20:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.658 20:08:23 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 107736' 00:07:11.658 killing process with pid 107736 00:07:11.658 20:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 107736 00:07:11.658 20:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 107736 00:07:12.224 20:08:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 107736 00:07:12.224 20:08:24 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:12.224 20:08:24 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 107736 00:07:12.224 20:08:24 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:12.224 20:08:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:12.224 20:08:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:12.224 20:08:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:12.224 20:08:24 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 107736 00:07:12.225 20:08:24 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 107736 ']' 00:07:12.225 20:08:24 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.225 20:08:24 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.225 20:08:24 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:12.225 20:08:24 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.225 20:08:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (107736) - No such process 00:07:12.225 ERROR: process (pid: 107736) is no longer running 00:07:12.225 20:08:24 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.225 20:08:24 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:12.225 20:08:24 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:12.225 20:08:24 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:12.225 20:08:24 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:12.225 20:08:24 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:12.225 20:08:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:12.225 20:08:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:12.225 20:08:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:12.225 20:08:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:12.225 00:07:12.225 real 0m1.036s 00:07:12.225 user 0m1.008s 00:07:12.225 sys 0m0.486s 00:07:12.225 20:08:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.225 20:08:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:12.225 ************************************ 00:07:12.225 END TEST default_locks 00:07:12.225 ************************************ 00:07:12.225 20:08:24 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:12.225 20:08:24 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.225 20:08:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.225 20:08:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:12.225 ************************************ 00:07:12.225 START TEST default_locks_via_rpc 00:07:12.225 ************************************ 00:07:12.225 20:08:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:12.225 20:08:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=107901 00:07:12.225 20:08:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:12.225 20:08:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 107901 00:07:12.225 20:08:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 107901 ']' 00:07:12.225 20:08:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.225 20:08:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.225 20:08:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.225 20:08:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.225 20:08:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.225 [2024-11-18 20:08:24.144395] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:07:12.225 [2024-11-18 20:08:24.144488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107901 ] 00:07:12.225 [2024-11-18 20:08:24.211359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.483 [2024-11-18 20:08:24.255574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.741 20:08:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.741 20:08:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:12.741 20:08:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:12.741 20:08:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.741 20:08:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.741 20:08:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.741 20:08:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:12.741 20:08:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:12.741 20:08:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:12.741 20:08:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:12.741 20:08:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:12.741 20:08:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.741 20:08:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.741 20:08:24 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.741 20:08:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 107901 00:07:12.741 20:08:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 107901 00:07:12.741 20:08:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:12.741 20:08:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 107901 00:07:12.741 20:08:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 107901 ']' 00:07:12.741 20:08:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 107901 00:07:12.741 20:08:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:12.741 20:08:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.741 20:08:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107901 00:07:12.999 20:08:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.999 20:08:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.999 20:08:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107901' 00:07:12.999 killing process with pid 107901 00:07:12.999 20:08:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 107901 00:07:12.999 20:08:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 107901 00:07:13.258 00:07:13.258 real 0m1.034s 00:07:13.258 user 0m1.014s 00:07:13.258 sys 0m0.479s 00:07:13.258 20:08:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.259 20:08:25 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.259 ************************************ 00:07:13.259 END TEST default_locks_via_rpc 00:07:13.259 ************************************ 00:07:13.259 20:08:25 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:13.259 20:08:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.259 20:08:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.259 20:08:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.259 ************************************ 00:07:13.259 START TEST non_locking_app_on_locked_coremask 00:07:13.259 ************************************ 00:07:13.259 20:08:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:13.259 20:08:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=108061 00:07:13.259 20:08:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:13.259 20:08:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 108061 /var/tmp/spdk.sock 00:07:13.259 20:08:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108061 ']' 00:07:13.259 20:08:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.259 20:08:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.259 20:08:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:07:13.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.259 20:08:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.259 20:08:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.259 [2024-11-18 20:08:25.235154] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:13.259 [2024-11-18 20:08:25.235251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108061 ] 00:07:13.517 [2024-11-18 20:08:25.303019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.518 [2024-11-18 20:08:25.351216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.777 20:08:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.777 20:08:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:13.777 20:08:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=108070 00:07:13.777 20:08:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:13.777 20:08:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 108070 /var/tmp/spdk2.sock 00:07:13.777 20:08:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108070 ']' 00:07:13.777 20:08:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:07:13.777 20:08:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.777 20:08:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:13.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:13.777 20:08:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.777 20:08:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.777 [2024-11-18 20:08:25.661190] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:13.777 [2024-11-18 20:08:25.661275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108070 ] 00:07:13.777 [2024-11-18 20:08:25.760175] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:13.777 [2024-11-18 20:08:25.760219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.036 [2024-11-18 20:08:25.854658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.602 20:08:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.602 20:08:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:14.602 20:08:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 108061 00:07:14.602 20:08:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 108061 00:07:14.602 20:08:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:14.861 lslocks: write error 00:07:14.861 20:08:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 108061 00:07:14.861 20:08:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 108061 ']' 00:07:14.861 20:08:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 108061 00:07:14.861 20:08:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:14.861 20:08:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.861 20:08:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108061 00:07:14.861 20:08:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:14.861 20:08:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:14.861 20:08:26 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 108061' 00:07:14.861 killing process with pid 108061 00:07:14.861 20:08:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 108061 00:07:14.862 20:08:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 108061 00:07:15.798 20:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 108070 00:07:15.798 20:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 108070 ']' 00:07:15.798 20:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 108070 00:07:15.798 20:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:15.798 20:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.798 20:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108070 00:07:15.798 20:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:15.798 20:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:15.798 20:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108070' 00:07:15.798 killing process with pid 108070 00:07:15.798 20:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 108070 00:07:15.798 20:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 108070 00:07:16.057 00:07:16.057 real 0m2.692s 00:07:16.057 user 0m2.711s 00:07:16.057 sys 0m0.965s 00:07:16.057 20:08:27 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.057 20:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.057 ************************************ 00:07:16.057 END TEST non_locking_app_on_locked_coremask 00:07:16.057 ************************************ 00:07:16.057 20:08:27 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:16.057 20:08:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.057 20:08:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.057 20:08:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:16.057 ************************************ 00:07:16.057 START TEST locking_app_on_unlocked_coremask 00:07:16.057 ************************************ 00:07:16.057 20:08:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:16.057 20:08:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=108368 00:07:16.057 20:08:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:16.057 20:08:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 108368 /var/tmp/spdk.sock 00:07:16.057 20:08:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108368 ']' 00:07:16.057 20:08:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.057 20:08:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.057 20:08:27 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.057 20:08:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.057 20:08:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.057 [2024-11-18 20:08:27.975616] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:16.057 [2024-11-18 20:08:27.975717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108368 ] 00:07:16.057 [2024-11-18 20:08:28.043458] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:16.057 [2024-11-18 20:08:28.043487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.316 [2024-11-18 20:08:28.089771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.575 20:08:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.575 20:08:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:16.575 20:08:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=108496 00:07:16.575 20:08:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:16.575 20:08:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 108496 /var/tmp/spdk2.sock 00:07:16.575 20:08:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108496 ']' 00:07:16.575 20:08:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:16.575 20:08:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.575 20:08:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:16.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:16.575 20:08:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.575 20:08:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.575 [2024-11-18 20:08:28.382712] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:07:16.575 [2024-11-18 20:08:28.382792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108496 ] 00:07:16.575 [2024-11-18 20:08:28.487147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.575 [2024-11-18 20:08:28.577684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.142 20:08:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.142 20:08:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:17.142 20:08:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 108496 00:07:17.142 20:08:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 108496 00:07:17.142 20:08:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:17.709 lslocks: write error 00:07:17.709 20:08:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 108368 00:07:17.709 20:08:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 108368 ']' 00:07:17.709 20:08:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 108368 00:07:17.709 20:08:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:17.709 20:08:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.709 20:08:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108368 00:07:17.709 20:08:29 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.709 20:08:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.710 20:08:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108368' 00:07:17.710 killing process with pid 108368 00:07:17.710 20:08:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 108368 00:07:17.710 20:08:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 108368 00:07:18.644 20:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 108496 00:07:18.644 20:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 108496 ']' 00:07:18.644 20:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 108496 00:07:18.644 20:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:18.644 20:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.644 20:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108496 00:07:18.644 20:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.644 20:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.644 20:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108496' 00:07:18.644 killing process with pid 108496 00:07:18.644 20:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 108496 00:07:18.644 20:08:30 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 108496 00:07:18.903 00:07:18.903 real 0m2.784s 00:07:18.903 user 0m2.831s 00:07:18.903 sys 0m0.969s 00:07:18.903 20:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.903 20:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.903 ************************************ 00:07:18.903 END TEST locking_app_on_unlocked_coremask 00:07:18.903 ************************************ 00:07:18.903 20:08:30 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:18.903 20:08:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.903 20:08:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.903 20:08:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.903 ************************************ 00:07:18.903 START TEST locking_app_on_locked_coremask 00:07:18.903 ************************************ 00:07:18.903 20:08:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:18.903 20:08:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=108797 00:07:18.903 20:08:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:18.903 20:08:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 108797 /var/tmp/spdk.sock 00:07:18.903 20:08:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108797 ']' 00:07:18.903 20:08:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:07:18.903 20:08:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.903 20:08:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.903 20:08:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.903 20:08:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.903 [2024-11-18 20:08:30.810667] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:18.903 [2024-11-18 20:08:30.810749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108797 ] 00:07:18.903 [2024-11-18 20:08:30.876112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.161 [2024-11-18 20:08:30.920825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.161 20:08:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.161 20:08:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:19.161 20:08:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=108803 00:07:19.161 20:08:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:19.161 20:08:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 108803 /var/tmp/spdk2.sock 
00:07:19.161 20:08:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:19.422 20:08:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 108803 /var/tmp/spdk2.sock 00:07:19.422 20:08:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:19.422 20:08:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.422 20:08:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:19.422 20:08:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.422 20:08:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 108803 /var/tmp/spdk2.sock 00:07:19.422 20:08:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108803 ']' 00:07:19.422 20:08:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:19.422 20:08:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.422 20:08:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:19.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:19.422 20:08:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.422 20:08:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.422 [2024-11-18 20:08:31.222990] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:19.422 [2024-11-18 20:08:31.223078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108803 ] 00:07:19.422 [2024-11-18 20:08:31.319588] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 108797 has claimed it. 00:07:19.422 [2024-11-18 20:08:31.323679] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:19.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (108803) - No such process 00:07:19.989 ERROR: process (pid: 108803) is no longer running 00:07:19.989 20:08:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.989 20:08:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:19.989 20:08:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:19.989 20:08:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:19.989 20:08:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:19.989 20:08:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:19.989 20:08:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 108797 00:07:19.989 20:08:31 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 108797 00:07:19.989 20:08:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:20.554 lslocks: write error 00:07:20.555 20:08:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 108797 00:07:20.555 20:08:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 108797 ']' 00:07:20.555 20:08:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 108797 00:07:20.555 20:08:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:20.555 20:08:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.555 20:08:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108797 00:07:20.555 20:08:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.555 20:08:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.555 20:08:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108797' 00:07:20.555 killing process with pid 108797 00:07:20.555 20:08:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 108797 00:07:20.555 20:08:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 108797 00:07:20.813 00:07:20.813 real 0m1.963s 00:07:20.813 user 0m2.215s 00:07:20.813 sys 0m0.617s 00:07:20.813 20:08:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.813 20:08:32 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:07:20.813 ************************************ 00:07:20.813 END TEST locking_app_on_locked_coremask 00:07:20.813 ************************************ 00:07:20.813 20:08:32 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:20.813 20:08:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.813 20:08:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.813 20:08:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.813 ************************************ 00:07:20.813 START TEST locking_overlapped_coremask 00:07:20.813 ************************************ 00:07:20.813 20:08:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:20.813 20:08:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=109092 00:07:20.813 20:08:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:20.813 20:08:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 109092 /var/tmp/spdk.sock 00:07:20.813 20:08:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 109092 ']' 00:07:20.813 20:08:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.813 20:08:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.813 20:08:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:20.813 20:08:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.813 20:08:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.072 [2024-11-18 20:08:32.827227] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:21.072 [2024-11-18 20:08:32.827303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109092 ] 00:07:21.072 [2024-11-18 20:08:32.894230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:21.072 [2024-11-18 20:08:32.945068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.072 [2024-11-18 20:08:32.945134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.072 [2024-11-18 20:08:32.945137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.330 20:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.330 20:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:21.330 20:08:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=109103 00:07:21.330 20:08:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 109103 /var/tmp/spdk2.sock 00:07:21.330 20:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:21.330 20:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 109103 /var/tmp/spdk2.sock 00:07:21.330 20:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:21.330 20:08:33 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:21.330 20:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.330 20:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:21.330 20:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.330 20:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 109103 /var/tmp/spdk2.sock 00:07:21.330 20:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 109103 ']' 00:07:21.330 20:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:21.330 20:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.330 20:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:21.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:21.330 20:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.330 20:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.330 [2024-11-18 20:08:33.262357] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:07:21.330 [2024-11-18 20:08:33.262461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109103 ] 00:07:21.589 [2024-11-18 20:08:33.368733] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 109092 has claimed it. 00:07:21.589 [2024-11-18 20:08:33.368791] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:22.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (109103) - No such process 00:07:22.157 ERROR: process (pid: 109103) is no longer running 00:07:22.157 20:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.157 20:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:22.157 20:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:22.157 20:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:22.157 20:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:22.157 20:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:22.157 20:08:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:22.157 20:08:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:22.157 20:08:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:22.157 20:08:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:22.157 20:08:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 109092 00:07:22.157 20:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 109092 ']' 00:07:22.157 20:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 109092 00:07:22.157 20:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:22.157 20:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.157 20:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109092 00:07:22.157 20:08:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.157 20:08:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.157 20:08:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109092' 00:07:22.157 killing process with pid 109092 00:07:22.157 20:08:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 109092 00:07:22.157 20:08:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 109092 00:07:22.418 00:07:22.418 real 0m1.620s 00:07:22.418 user 0m4.561s 00:07:22.418 sys 0m0.476s 00:07:22.418 20:08:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.418 20:08:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:22.418 ************************************ 
00:07:22.418 END TEST locking_overlapped_coremask 00:07:22.418 ************************************ 00:07:22.418 20:08:34 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:22.418 20:08:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.418 20:08:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.418 20:08:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.678 ************************************ 00:07:22.678 START TEST locking_overlapped_coremask_via_rpc 00:07:22.678 ************************************ 00:07:22.678 20:08:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:22.678 20:08:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=109271 00:07:22.678 20:08:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:22.678 20:08:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 109271 /var/tmp/spdk.sock 00:07:22.678 20:08:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 109271 ']' 00:07:22.678 20:08:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.678 20:08:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.678 20:08:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:22.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.678 20:08:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.678 20:08:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.678 [2024-11-18 20:08:34.496929] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:22.678 [2024-11-18 20:08:34.497038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109271 ] 00:07:22.678 [2024-11-18 20:08:34.559672] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:22.678 [2024-11-18 20:08:34.559705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:22.678 [2024-11-18 20:08:34.603774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.678 [2024-11-18 20:08:34.603836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.678 [2024-11-18 20:08:34.603840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.937 20:08:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.937 20:08:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:22.937 20:08:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=109368 00:07:22.937 20:08:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 109368 /var/tmp/spdk2.sock 00:07:22.937 20:08:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:07:22.937 20:08:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 109368 ']' 00:07:22.937 20:08:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:22.937 20:08:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.937 20:08:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:22.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:22.937 20:08:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.937 20:08:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.937 [2024-11-18 20:08:34.919045] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:22.937 [2024-11-18 20:08:34.919141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109368 ] 00:07:23.195 [2024-11-18 20:08:35.023259] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:23.195 [2024-11-18 20:08:35.023294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:23.195 [2024-11-18 20:08:35.119355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:23.195 [2024-11-18 20:08:35.122728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:23.195 [2024-11-18 20:08:35.122730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.131 20:08:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.131 20:08:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:24.131 20:08:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:24.131 20:08:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.131 20:08:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.131 20:08:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.131 20:08:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:24.131 20:08:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:24.131 20:08:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:24.131 20:08:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:24.131 20:08:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.131 20:08:35 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:24.131 20:08:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.131 20:08:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:24.131 20:08:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.131 20:08:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.131 [2024-11-18 20:08:35.899738] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 109271 has claimed it. 00:07:24.131 request: 00:07:24.131 { 00:07:24.131 "method": "framework_enable_cpumask_locks", 00:07:24.131 "req_id": 1 00:07:24.131 } 00:07:24.131 Got JSON-RPC error response 00:07:24.131 response: 00:07:24.131 { 00:07:24.131 "code": -32603, 00:07:24.131 "message": "Failed to claim CPU core: 2" 00:07:24.131 } 00:07:24.131 20:08:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:24.131 20:08:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:24.131 20:08:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:24.131 20:08:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:24.131 20:08:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:24.131 20:08:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 109271 /var/tmp/spdk.sock 00:07:24.131 20:08:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- 
# '[' -z 109271 ']' 00:07:24.131 20:08:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.131 20:08:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.131 20:08:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.131 20:08:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.131 20:08:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.390 20:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.390 20:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:24.390 20:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 109368 /var/tmp/spdk2.sock 00:07:24.390 20:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 109368 ']' 00:07:24.390 20:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:24.390 20:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.390 20:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:24.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:24.390 20:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.390 20:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.649 20:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.649 20:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:24.649 20:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:24.649 20:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:24.649 20:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:24.649 20:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:24.649 00:07:24.649 real 0m2.016s 00:07:24.649 user 0m1.117s 00:07:24.649 sys 0m0.182s 00:07:24.649 20:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.649 20:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.649 ************************************ 00:07:24.649 END TEST locking_overlapped_coremask_via_rpc 00:07:24.649 ************************************ 00:07:24.649 20:08:36 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:24.649 20:08:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 109271 ]] 00:07:24.649 20:08:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 109271 00:07:24.649 20:08:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 109271 ']' 00:07:24.649 20:08:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 109271 00:07:24.649 20:08:36 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:24.649 20:08:36 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.649 20:08:36 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109271 00:07:24.649 20:08:36 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.649 20:08:36 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.649 20:08:36 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109271' 00:07:24.649 killing process with pid 109271 00:07:24.649 20:08:36 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 109271 00:07:24.649 20:08:36 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 109271 00:07:24.908 20:08:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 109368 ]] 00:07:24.908 20:08:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 109368 00:07:24.908 20:08:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 109368 ']' 00:07:24.908 20:08:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 109368 00:07:25.167 20:08:36 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:25.167 20:08:36 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.167 20:08:36 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109368 00:07:25.167 20:08:36 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:25.167 20:08:36 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:25.167 20:08:36 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109368' 00:07:25.167 
killing process with pid 109368 00:07:25.167 20:08:36 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 109368 00:07:25.167 20:08:36 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 109368 00:07:25.426 20:08:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:25.426 20:08:37 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:25.426 20:08:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 109271 ]] 00:07:25.426 20:08:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 109271 00:07:25.426 20:08:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 109271 ']' 00:07:25.426 20:08:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 109271 00:07:25.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (109271) - No such process 00:07:25.426 20:08:37 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 109271 is not found' 00:07:25.426 Process with pid 109271 is not found 00:07:25.426 20:08:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 109368 ]] 00:07:25.426 20:08:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 109368 00:07:25.426 20:08:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 109368 ']' 00:07:25.426 20:08:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 109368 00:07:25.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (109368) - No such process 00:07:25.426 20:08:37 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 109368 is not found' 00:07:25.426 Process with pid 109368 is not found 00:07:25.426 20:08:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:25.426 00:07:25.426 real 0m14.500s 00:07:25.426 user 0m26.868s 00:07:25.426 sys 0m5.120s 00:07:25.426 20:08:37 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.426 20:08:37 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:07:25.426 ************************************ 00:07:25.426 END TEST cpu_locks 00:07:25.426 ************************************ 00:07:25.426 00:07:25.426 real 0m39.027s 00:07:25.426 user 1m17.450s 00:07:25.426 sys 0m9.154s 00:07:25.426 20:08:37 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.426 20:08:37 event -- common/autotest_common.sh@10 -- # set +x 00:07:25.426 ************************************ 00:07:25.426 END TEST event 00:07:25.426 ************************************ 00:07:25.426 20:08:37 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:25.426 20:08:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.426 20:08:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.426 20:08:37 -- common/autotest_common.sh@10 -- # set +x 00:07:25.426 ************************************ 00:07:25.426 START TEST thread 00:07:25.426 ************************************ 00:07:25.426 20:08:37 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:25.685 * Looking for test storage... 
00:07:25.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:25.685 20:08:37 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:25.685 20:08:37 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:25.685 20:08:37 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:25.685 20:08:37 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:25.685 20:08:37 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.685 20:08:37 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.685 20:08:37 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.685 20:08:37 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.685 20:08:37 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.685 20:08:37 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.685 20:08:37 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:25.685 20:08:37 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.685 20:08:37 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.685 20:08:37 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.685 20:08:37 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.685 20:08:37 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:25.685 20:08:37 thread -- scripts/common.sh@345 -- # : 1 00:07:25.685 20:08:37 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.685 20:08:37 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:25.685 20:08:37 thread -- scripts/common.sh@365 -- # decimal 1 00:07:25.685 20:08:37 thread -- scripts/common.sh@353 -- # local d=1 00:07:25.685 20:08:37 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.685 20:08:37 thread -- scripts/common.sh@355 -- # echo 1 00:07:25.685 20:08:37 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.685 20:08:37 thread -- scripts/common.sh@366 -- # decimal 2 00:07:25.685 20:08:37 thread -- scripts/common.sh@353 -- # local d=2 00:07:25.685 20:08:37 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.685 20:08:37 thread -- scripts/common.sh@355 -- # echo 2 00:07:25.685 20:08:37 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.685 20:08:37 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.685 20:08:37 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.685 20:08:37 thread -- scripts/common.sh@368 -- # return 0 00:07:25.685 20:08:37 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.685 20:08:37 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:25.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.685 --rc genhtml_branch_coverage=1 00:07:25.685 --rc genhtml_function_coverage=1 00:07:25.685 --rc genhtml_legend=1 00:07:25.685 --rc geninfo_all_blocks=1 00:07:25.685 --rc geninfo_unexecuted_blocks=1 00:07:25.685 00:07:25.685 ' 00:07:25.685 20:08:37 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:25.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.686 --rc genhtml_branch_coverage=1 00:07:25.686 --rc genhtml_function_coverage=1 00:07:25.686 --rc genhtml_legend=1 00:07:25.686 --rc geninfo_all_blocks=1 00:07:25.686 --rc geninfo_unexecuted_blocks=1 00:07:25.686 00:07:25.686 ' 00:07:25.686 20:08:37 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:25.686 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.686 --rc genhtml_branch_coverage=1 00:07:25.686 --rc genhtml_function_coverage=1 00:07:25.686 --rc genhtml_legend=1 00:07:25.686 --rc geninfo_all_blocks=1 00:07:25.686 --rc geninfo_unexecuted_blocks=1 00:07:25.686 00:07:25.686 ' 00:07:25.686 20:08:37 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:25.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.686 --rc genhtml_branch_coverage=1 00:07:25.686 --rc genhtml_function_coverage=1 00:07:25.686 --rc genhtml_legend=1 00:07:25.686 --rc geninfo_all_blocks=1 00:07:25.686 --rc geninfo_unexecuted_blocks=1 00:07:25.686 00:07:25.686 ' 00:07:25.686 20:08:37 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:25.686 20:08:37 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:25.686 20:08:37 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.686 20:08:37 thread -- common/autotest_common.sh@10 -- # set +x 00:07:25.686 ************************************ 00:07:25.686 START TEST thread_poller_perf 00:07:25.686 ************************************ 00:07:25.686 20:08:37 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:25.686 [2024-11-18 20:08:37.597474] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:07:25.686 [2024-11-18 20:08:37.597536] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109775 ] 00:07:25.686 [2024-11-18 20:08:37.662860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.944 [2024-11-18 20:08:37.708361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.944 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:26.880 [2024-11-18T19:08:38.888Z] ====================================== 00:07:26.880 [2024-11-18T19:08:38.888Z] busy:2712467691 (cyc) 00:07:26.880 [2024-11-18T19:08:38.888Z] total_run_count: 366000 00:07:26.880 [2024-11-18T19:08:38.888Z] tsc_hz: 2700000000 (cyc) 00:07:26.880 [2024-11-18T19:08:38.888Z] ====================================== 00:07:26.880 [2024-11-18T19:08:38.888Z] poller_cost: 7411 (cyc), 2744 (nsec) 00:07:26.880 00:07:26.880 real 0m1.175s 00:07:26.880 user 0m1.106s 00:07:26.880 sys 0m0.065s 00:07:26.880 20:08:38 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.880 20:08:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:26.880 ************************************ 00:07:26.880 END TEST thread_poller_perf 00:07:26.880 ************************************ 00:07:26.880 20:08:38 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:26.880 20:08:38 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:26.880 20:08:38 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.880 20:08:38 thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.880 ************************************ 00:07:26.880 START TEST thread_poller_perf 00:07:26.880 
************************************ 00:07:26.880 20:08:38 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:26.880 [2024-11-18 20:08:38.825458] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:26.880 [2024-11-18 20:08:38.825526] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109932 ] 00:07:27.140 [2024-11-18 20:08:38.891767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.140 [2024-11-18 20:08:38.936596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.140 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:28.073 [2024-11-18T19:08:40.081Z] ====================================== 00:07:28.073 [2024-11-18T19:08:40.081Z] busy:2702192985 (cyc) 00:07:28.073 [2024-11-18T19:08:40.081Z] total_run_count: 4848000 00:07:28.073 [2024-11-18T19:08:40.081Z] tsc_hz: 2700000000 (cyc) 00:07:28.073 [2024-11-18T19:08:40.081Z] ====================================== 00:07:28.073 [2024-11-18T19:08:40.081Z] poller_cost: 557 (cyc), 206 (nsec) 00:07:28.073 00:07:28.073 real 0m1.168s 00:07:28.073 user 0m1.102s 00:07:28.073 sys 0m0.062s 00:07:28.073 20:08:39 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.073 20:08:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:28.073 ************************************ 00:07:28.073 END TEST thread_poller_perf 00:07:28.073 ************************************ 00:07:28.073 20:08:40 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:28.073 00:07:28.073 real 0m2.593s 00:07:28.073 user 0m2.352s 00:07:28.073 sys 0m0.247s 00:07:28.073 20:08:40 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.073 20:08:40 thread -- common/autotest_common.sh@10 -- # set +x 00:07:28.073 ************************************ 00:07:28.073 END TEST thread 00:07:28.073 ************************************ 00:07:28.073 20:08:40 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:28.073 20:08:40 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:28.073 20:08:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.073 20:08:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.073 20:08:40 -- common/autotest_common.sh@10 -- # set +x 00:07:28.073 ************************************ 00:07:28.073 START TEST app_cmdline 00:07:28.073 ************************************ 00:07:28.073 20:08:40 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:28.333 * Looking for test storage... 00:07:28.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:28.333 20:08:40 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:28.333 20:08:40 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:28.333 20:08:40 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:28.333 20:08:40 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:28.333 20:08:40 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.333 20:08:40 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.333 20:08:40 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.333 20:08:40 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.333 20:08:40 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.333 20:08:40 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.333 20:08:40 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:07:28.333 20:08:40 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.333 20:08:40 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.333 20:08:40 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.333 20:08:40 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.333 20:08:40 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:28.333 20:08:40 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:28.333 20:08:40 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.333 20:08:40 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:28.333 20:08:40 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:28.333 20:08:40 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:28.333 20:08:40 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.333 20:08:40 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:28.333 20:08:40 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.333 20:08:40 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:28.333 20:08:40 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:28.333 20:08:40 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.333 20:08:40 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:28.333 20:08:40 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.333 20:08:40 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.333 20:08:40 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.333 20:08:40 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:28.333 20:08:40 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.333 20:08:40 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:28.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.333 --rc genhtml_branch_coverage=1 
00:07:28.333 --rc genhtml_function_coverage=1 00:07:28.333 --rc genhtml_legend=1 00:07:28.333 --rc geninfo_all_blocks=1 00:07:28.333 --rc geninfo_unexecuted_blocks=1 00:07:28.333 00:07:28.333 ' 00:07:28.333 20:08:40 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:28.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.333 --rc genhtml_branch_coverage=1 00:07:28.333 --rc genhtml_function_coverage=1 00:07:28.333 --rc genhtml_legend=1 00:07:28.333 --rc geninfo_all_blocks=1 00:07:28.333 --rc geninfo_unexecuted_blocks=1 00:07:28.333 00:07:28.333 ' 00:07:28.333 20:08:40 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:28.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.333 --rc genhtml_branch_coverage=1 00:07:28.333 --rc genhtml_function_coverage=1 00:07:28.333 --rc genhtml_legend=1 00:07:28.333 --rc geninfo_all_blocks=1 00:07:28.333 --rc geninfo_unexecuted_blocks=1 00:07:28.333 00:07:28.333 ' 00:07:28.333 20:08:40 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:28.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.333 --rc genhtml_branch_coverage=1 00:07:28.333 --rc genhtml_function_coverage=1 00:07:28.333 --rc genhtml_legend=1 00:07:28.333 --rc geninfo_all_blocks=1 00:07:28.333 --rc geninfo_unexecuted_blocks=1 00:07:28.333 00:07:28.333 ' 00:07:28.333 20:08:40 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:28.333 20:08:40 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=110132 00:07:28.333 20:08:40 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:28.333 20:08:40 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 110132 00:07:28.333 20:08:40 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 110132 ']' 00:07:28.333 20:08:40 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:28.333 20:08:40 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.333 20:08:40 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.333 20:08:40 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.333 20:08:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:28.333 [2024-11-18 20:08:40.256487] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:28.333 [2024-11-18 20:08:40.256571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110132 ] 00:07:28.333 [2024-11-18 20:08:40.324080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.593 [2024-11-18 20:08:40.374039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.851 20:08:40 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.851 20:08:40 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:28.851 20:08:40 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:29.109 { 00:07:29.109 "version": "SPDK v25.01-pre git sha1 d47eb51c9", 00:07:29.109 "fields": { 00:07:29.109 "major": 25, 00:07:29.109 "minor": 1, 00:07:29.109 "patch": 0, 00:07:29.109 "suffix": "-pre", 00:07:29.109 "commit": "d47eb51c9" 00:07:29.110 } 00:07:29.110 } 00:07:29.110 20:08:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:29.110 20:08:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:29.110 20:08:40 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:07:29.110 20:08:40 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:29.110 20:08:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:29.110 20:08:40 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.110 20:08:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:29.110 20:08:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:29.110 20:08:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:29.110 20:08:40 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.110 20:08:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:29.110 20:08:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:29.110 20:08:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:29.110 20:08:40 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:29.110 20:08:40 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:29.110 20:08:40 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:29.110 20:08:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.110 20:08:40 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:29.110 20:08:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.110 20:08:40 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:29.110 20:08:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:07:29.110 20:08:40 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:29.110 20:08:40 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:29.110 20:08:40 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:29.368 request: 00:07:29.369 { 00:07:29.369 "method": "env_dpdk_get_mem_stats", 00:07:29.369 "req_id": 1 00:07:29.369 } 00:07:29.369 Got JSON-RPC error response 00:07:29.369 response: 00:07:29.369 { 00:07:29.369 "code": -32601, 00:07:29.369 "message": "Method not found" 00:07:29.369 } 00:07:29.369 20:08:41 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:29.369 20:08:41 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:29.369 20:08:41 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:29.369 20:08:41 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:29.369 20:08:41 app_cmdline -- app/cmdline.sh@1 -- # killprocess 110132 00:07:29.369 20:08:41 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 110132 ']' 00:07:29.369 20:08:41 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 110132 00:07:29.369 20:08:41 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:29.369 20:08:41 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.369 20:08:41 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110132 00:07:29.369 20:08:41 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.369 20:08:41 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.369 20:08:41 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110132' 00:07:29.369 killing process with pid 110132 00:07:29.369 20:08:41 
app_cmdline -- common/autotest_common.sh@973 -- # kill 110132 00:07:29.369 20:08:41 app_cmdline -- common/autotest_common.sh@978 -- # wait 110132 00:07:29.937 00:07:29.937 real 0m1.601s 00:07:29.937 user 0m1.993s 00:07:29.937 sys 0m0.501s 00:07:29.937 20:08:41 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.937 20:08:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:29.937 ************************************ 00:07:29.937 END TEST app_cmdline 00:07:29.937 ************************************ 00:07:29.937 20:08:41 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:29.937 20:08:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:29.937 20:08:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.937 20:08:41 -- common/autotest_common.sh@10 -- # set +x 00:07:29.937 ************************************ 00:07:29.937 START TEST version 00:07:29.937 ************************************ 00:07:29.937 20:08:41 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:29.937 * Looking for test storage... 
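The app_cmdline run above exercises a negative test: the harness wraps `rpc.py env_dpdk_get_mem_stats` in a `NOT` helper and expects it to fail with JSON-RPC error -32601 ("Method not found"). Below is a minimal sketch of that pattern; `NOT` here is a simplified stand-in for the real helper in autotest_common.sh, and `fake_rpc` is a hypothetical client invented for this sketch, not SPDK's rpc.py.

```shell
# Simplified NOT helper: succeed only when the wrapped command fails,
# mirroring the negative-test idiom in the log above.
NOT() {
    if "$@"; then
        return 1
    fi
    return 0
}

# Hypothetical RPC stand-in: only rpc_get_methods is registered, so any
# other method name yields a "Method not found"-style failure.
fake_rpc() {
    case "$1" in
        rpc_get_methods)
            echo '["rpc_get_methods","spdk_get_version"]'
            ;;
        *)
            echo '{"code": -32601, "message": "Method not found"}' >&2
            return 1
            ;;
    esac
}

# Like the log: the unknown method must fail, and NOT turns that into a pass.
NOT fake_rpc env_dpdk_get_mem_stats && echo "negative test passed"
```

The real helper in autotest_common.sh additionally tracks an error status (`es`) and inspects it, as the `es=1` line in the log shows; this sketch keeps only the pass/fail inversion.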
00:07:29.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:29.937 20:08:41 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:29.937 20:08:41 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:29.937 20:08:41 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:29.937 20:08:41 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:29.937 20:08:41 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.937 20:08:41 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.937 20:08:41 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.937 20:08:41 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.937 20:08:41 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.937 20:08:41 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.937 20:08:41 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.937 20:08:41 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.937 20:08:41 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.937 20:08:41 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.937 20:08:41 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.937 20:08:41 version -- scripts/common.sh@344 -- # case "$op" in 00:07:29.937 20:08:41 version -- scripts/common.sh@345 -- # : 1 00:07:29.937 20:08:41 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.937 20:08:41 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.937 20:08:41 version -- scripts/common.sh@365 -- # decimal 1 00:07:29.937 20:08:41 version -- scripts/common.sh@353 -- # local d=1 00:07:29.937 20:08:41 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.937 20:08:41 version -- scripts/common.sh@355 -- # echo 1 00:07:29.938 20:08:41 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.938 20:08:41 version -- scripts/common.sh@366 -- # decimal 2 00:07:29.938 20:08:41 version -- scripts/common.sh@353 -- # local d=2 00:07:29.938 20:08:41 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.938 20:08:41 version -- scripts/common.sh@355 -- # echo 2 00:07:29.938 20:08:41 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.938 20:08:41 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.938 20:08:41 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.938 20:08:41 version -- scripts/common.sh@368 -- # return 0 00:07:29.938 20:08:41 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.938 20:08:41 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:29.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.938 --rc genhtml_branch_coverage=1 00:07:29.938 --rc genhtml_function_coverage=1 00:07:29.938 --rc genhtml_legend=1 00:07:29.938 --rc geninfo_all_blocks=1 00:07:29.938 --rc geninfo_unexecuted_blocks=1 00:07:29.938 00:07:29.938 ' 00:07:29.938 20:08:41 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:29.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.938 --rc genhtml_branch_coverage=1 00:07:29.938 --rc genhtml_function_coverage=1 00:07:29.938 --rc genhtml_legend=1 00:07:29.938 --rc geninfo_all_blocks=1 00:07:29.938 --rc geninfo_unexecuted_blocks=1 00:07:29.938 00:07:29.938 ' 00:07:29.938 20:08:41 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:29.938 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.938 --rc genhtml_branch_coverage=1 00:07:29.938 --rc genhtml_function_coverage=1 00:07:29.938 --rc genhtml_legend=1 00:07:29.938 --rc geninfo_all_blocks=1 00:07:29.938 --rc geninfo_unexecuted_blocks=1 00:07:29.938 00:07:29.938 ' 00:07:29.938 20:08:41 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:29.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.938 --rc genhtml_branch_coverage=1 00:07:29.938 --rc genhtml_function_coverage=1 00:07:29.938 --rc genhtml_legend=1 00:07:29.938 --rc geninfo_all_blocks=1 00:07:29.938 --rc geninfo_unexecuted_blocks=1 00:07:29.938 00:07:29.938 ' 00:07:29.938 20:08:41 version -- app/version.sh@17 -- # get_header_version major 00:07:29.938 20:08:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:29.938 20:08:41 version -- app/version.sh@14 -- # cut -f2 00:07:29.938 20:08:41 version -- app/version.sh@14 -- # tr -d '"' 00:07:29.938 20:08:41 version -- app/version.sh@17 -- # major=25 00:07:29.938 20:08:41 version -- app/version.sh@18 -- # get_header_version minor 00:07:29.938 20:08:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:29.938 20:08:41 version -- app/version.sh@14 -- # cut -f2 00:07:29.938 20:08:41 version -- app/version.sh@14 -- # tr -d '"' 00:07:29.938 20:08:41 version -- app/version.sh@18 -- # minor=1 00:07:29.938 20:08:41 version -- app/version.sh@19 -- # get_header_version patch 00:07:29.938 20:08:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:29.938 20:08:41 version -- app/version.sh@14 -- # cut -f2 00:07:29.938 20:08:41 version -- app/version.sh@14 -- # tr -d '"' 00:07:29.938 
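The version test above builds the SPDK version string by pulling `SPDK_VERSION_MAJOR`/`MINOR`/`PATCH`/`SUFFIX` out of include/spdk/version.h with a `grep -E | cut | tr -d '"'` pipeline. A self-contained sketch of that parsing follows; the header contents are a made-up stand-in matching the 25.1rc0 result in the log, and the `cut` field index is adapted for space-delimited lines rather than copied from the real version.sh.

```shell
# Fake version.h with the fields the log's run resolved (major=25, minor=1,
# patch=0, suffix=-pre).
hdr=$(mktemp)
cat > "$hdr" <<'EOF'
#define SPDK_VERSION_MAJOR 25
#define SPDK_VERSION_MINOR 1
#define SPDK_VERSION_PATCH 0
#define SPDK_VERSION_SUFFIX "-pre"
EOF

# grep the #define line, take the value field, strip quotes -- the same
# three-stage pipeline visible in the app/version.sh trace above.
get_header_version() {
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" |
        cut -d' ' -f3 | tr -d '"'
}

major=$(get_header_version MAJOR)
minor=$(get_header_version MINOR)
patch=$(get_header_version PATCH)
suffix=$(get_header_version SUFFIX)

version="$major.$minor"
# Patch is only appended when non-zero, and a -pre suffix becomes an rc0
# tag -- matching the 25.1 -> 25.1rc0 transformation in the log.
(( patch != 0 )) && version="$version.$patch"
[ "$suffix" = "-pre" ] && version="${version}rc0"
echo "$version"
rm -f "$hdr"
```

The test then compares this shell-derived string against `python3 -c 'import spdk; print(spdk.__version__)'`, which is the `[[ 25.1rc0 == \2\5\.\1\r\c\0 ]]` check in the log.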
20:08:41 version -- app/version.sh@19 -- # patch=0 00:07:29.938 20:08:41 version -- app/version.sh@20 -- # get_header_version suffix 00:07:29.938 20:08:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:29.938 20:08:41 version -- app/version.sh@14 -- # cut -f2 00:07:29.938 20:08:41 version -- app/version.sh@14 -- # tr -d '"' 00:07:29.938 20:08:41 version -- app/version.sh@20 -- # suffix=-pre 00:07:29.938 20:08:41 version -- app/version.sh@22 -- # version=25.1 00:07:29.938 20:08:41 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:29.938 20:08:41 version -- app/version.sh@28 -- # version=25.1rc0 00:07:29.938 20:08:41 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:29.938 20:08:41 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:29.938 20:08:41 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:29.938 20:08:41 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:29.938 00:07:29.938 real 0m0.195s 00:07:29.938 user 0m0.123s 00:07:29.938 sys 0m0.097s 00:07:29.938 20:08:41 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.938 20:08:41 version -- common/autotest_common.sh@10 -- # set +x 00:07:29.938 ************************************ 00:07:29.938 END TEST version 00:07:29.938 ************************************ 00:07:29.938 20:08:41 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:29.938 20:08:41 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:29.938 20:08:41 -- spdk/autotest.sh@194 -- # uname -s 00:07:29.938 20:08:41 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:07:29.938 20:08:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:29.938 20:08:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:29.938 20:08:41 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:29.938 20:08:41 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:29.938 20:08:41 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:29.938 20:08:41 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:29.938 20:08:41 -- common/autotest_common.sh@10 -- # set +x 00:07:29.938 20:08:41 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:29.938 20:08:41 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:29.938 20:08:41 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:29.938 20:08:41 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:29.938 20:08:41 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:30.198 20:08:41 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:30.198 20:08:41 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:30.198 20:08:41 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:30.198 20:08:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.198 20:08:41 -- common/autotest_common.sh@10 -- # set +x 00:07:30.198 ************************************ 00:07:30.198 START TEST nvmf_tcp 00:07:30.198 ************************************ 00:07:30.198 20:08:41 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:30.198 * Looking for test storage... 
00:07:30.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:30.198 20:08:42 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:30.198 20:08:42 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:30.198 20:08:42 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:30.198 20:08:42 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:30.198 20:08:42 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.198 20:08:42 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.198 20:08:42 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.198 20:08:42 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.198 20:08:42 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.198 20:08:42 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.198 20:08:42 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.198 20:08:42 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.198 20:08:42 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.198 20:08:42 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.198 20:08:42 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.198 20:08:42 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:30.198 20:08:42 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:30.198 20:08:42 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.198 20:08:42 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.198 20:08:42 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:30.198 20:08:42 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:30.198 20:08:42 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.198 20:08:42 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:30.198 20:08:42 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.198 20:08:42 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:30.198 20:08:42 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:30.198 20:08:42 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.198 20:08:42 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:30.198 20:08:42 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.198 20:08:42 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.198 20:08:42 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.198 20:08:42 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:30.198 20:08:42 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.198 20:08:42 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:30.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.198 --rc genhtml_branch_coverage=1 00:07:30.198 --rc genhtml_function_coverage=1 00:07:30.198 --rc genhtml_legend=1 00:07:30.198 --rc geninfo_all_blocks=1 00:07:30.198 --rc geninfo_unexecuted_blocks=1 00:07:30.198 00:07:30.198 ' 00:07:30.198 20:08:42 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:30.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.198 --rc genhtml_branch_coverage=1 00:07:30.198 --rc genhtml_function_coverage=1 00:07:30.198 --rc genhtml_legend=1 00:07:30.198 --rc geninfo_all_blocks=1 00:07:30.198 --rc geninfo_unexecuted_blocks=1 00:07:30.198 00:07:30.198 ' 00:07:30.198 20:08:42 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:07:30.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.198 --rc genhtml_branch_coverage=1 00:07:30.198 --rc genhtml_function_coverage=1 00:07:30.198 --rc genhtml_legend=1 00:07:30.198 --rc geninfo_all_blocks=1 00:07:30.198 --rc geninfo_unexecuted_blocks=1 00:07:30.198 00:07:30.198 ' 00:07:30.198 20:08:42 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:30.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.198 --rc genhtml_branch_coverage=1 00:07:30.198 --rc genhtml_function_coverage=1 00:07:30.198 --rc genhtml_legend=1 00:07:30.198 --rc geninfo_all_blocks=1 00:07:30.198 --rc geninfo_unexecuted_blocks=1 00:07:30.198 00:07:30.198 ' 00:07:30.198 20:08:42 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:30.198 20:08:42 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:30.198 20:08:42 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:30.198 20:08:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:30.198 20:08:42 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.198 20:08:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:30.198 ************************************ 00:07:30.198 START TEST nvmf_target_core 00:07:30.198 ************************************ 00:07:30.198 20:08:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:30.198 * Looking for test storage... 
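Each test section above gates its lcov options on `lt 1.15 2`, a `cmp_versions` call from scripts/common.sh that splits version strings on `.`, `-`, and `:` and compares them field by field. The following is a simplified re-implementation of that comparison for purely numeric fields, not the exact upstream code.

```shell
# lt A B: return 0 iff version A sorts strictly before version B.
# Fields are split on . - : (the IFS=.-: reads in the trace above) and
# compared numerically; missing fields count as 0, so 2 == 2.0.
lt() {
    local -a ver1 ver2
    local v max
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1  # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"
lt 2 1.15 || echo "2 >= 1.15"
```

Note the field-wise numeric compare is what makes `1.9 < 1.15` true here (9 < 15), which is the behavior a naive string comparison would get wrong; the real cmp_versions also handles `>`, `>=`, and `<=` operators.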
00:07:30.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:30.198 20:08:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:30.198 20:08:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:07:30.198 20:08:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:30.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.460 --rc genhtml_branch_coverage=1 00:07:30.460 --rc genhtml_function_coverage=1 00:07:30.460 --rc genhtml_legend=1 00:07:30.460 --rc geninfo_all_blocks=1 00:07:30.460 --rc geninfo_unexecuted_blocks=1 00:07:30.460 00:07:30.460 ' 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:30.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.460 --rc genhtml_branch_coverage=1 
00:07:30.460 --rc genhtml_function_coverage=1 00:07:30.460 --rc genhtml_legend=1 00:07:30.460 --rc geninfo_all_blocks=1 00:07:30.460 --rc geninfo_unexecuted_blocks=1 00:07:30.460 00:07:30.460 ' 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:30.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.460 --rc genhtml_branch_coverage=1 00:07:30.460 --rc genhtml_function_coverage=1 00:07:30.460 --rc genhtml_legend=1 00:07:30.460 --rc geninfo_all_blocks=1 00:07:30.460 --rc geninfo_unexecuted_blocks=1 00:07:30.460 00:07:30.460 ' 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:30.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.460 --rc genhtml_branch_coverage=1 00:07:30.460 --rc genhtml_function_coverage=1 00:07:30.460 --rc genhtml_legend=1 00:07:30.460 --rc geninfo_all_blocks=1 00:07:30.460 --rc geninfo_unexecuted_blocks=1 00:07:30.460 00:07:30.460 ' 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.460 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:30.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:30.461 ************************************ 00:07:30.461 START TEST nvmf_abort 00:07:30.461 ************************************ 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:30.461 * Looking for test storage... 
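The `[: : integer expression expected` diagnostic from nvmf/common.sh line 33 above comes from running `[ '' -eq 1 ]`: an empty string is not a valid integer operand, so `[` prints the message and returns nonzero (the surrounding `if` simply takes the false branch, which is why the run continues). The sketch below reproduces the failure mode and shows a defaulting guard; it is illustrative only, not a patch to the actual script.

```shell
# An unset/empty variable, as at common.sh line 33 in the log.
maybe_empty=""

# Reproduces the error class: with an empty operand, [ emits
# "integer expression expected" (suppressed here) and fails.
if [ "$maybe_empty" -eq 1 ] 2>/dev/null; then
    echo "unreachable"
fi

# Guarded form: ${var:-0} expands empty to 0, so the arithmetic
# comparison is always well-formed.
if [ "${maybe_empty:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"
fi
```

The same effect can be had with bash arithmetic evaluation, `(( maybe_empty == 1 ))`, which treats an empty variable as 0 instead of erroring.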
00:07:30.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.461 
20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:30.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.461 --rc genhtml_branch_coverage=1 00:07:30.461 --rc genhtml_function_coverage=1 00:07:30.461 --rc genhtml_legend=1 00:07:30.461 --rc geninfo_all_blocks=1 00:07:30.461 --rc 
geninfo_unexecuted_blocks=1 00:07:30.461 00:07:30.461 ' 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:30.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.461 --rc genhtml_branch_coverage=1 00:07:30.461 --rc genhtml_function_coverage=1 00:07:30.461 --rc genhtml_legend=1 00:07:30.461 --rc geninfo_all_blocks=1 00:07:30.461 --rc geninfo_unexecuted_blocks=1 00:07:30.461 00:07:30.461 ' 00:07:30.461 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:30.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.461 --rc genhtml_branch_coverage=1 00:07:30.462 --rc genhtml_function_coverage=1 00:07:30.462 --rc genhtml_legend=1 00:07:30.462 --rc geninfo_all_blocks=1 00:07:30.462 --rc geninfo_unexecuted_blocks=1 00:07:30.462 00:07:30.462 ' 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:30.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.462 --rc genhtml_branch_coverage=1 00:07:30.462 --rc genhtml_function_coverage=1 00:07:30.462 --rc genhtml_legend=1 00:07:30.462 --rc geninfo_all_blocks=1 00:07:30.462 --rc geninfo_unexecuted_blocks=1 00:07:30.462 00:07:30.462 ' 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.462 20:08:42 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:30.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.462 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.722 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:30.722 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:07:30.722 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:30.722 20:08:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:32.635 20:08:44 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:32.635 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:32.635 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:32.635 20:08:44 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:32.635 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:0a:00.1: cvl_0_1' 00:07:32.635 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:32.635 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:32.636 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:07:32.636 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:32.636 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:32.636 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:32.636 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:32.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:32.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:07:32.895 00:07:32.895 --- 10.0.0.2 ping statistics --- 00:07:32.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.895 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:32.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:32.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:07:32.895 00:07:32.895 --- 10.0.0.1 ping statistics --- 00:07:32.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.895 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=112227 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 112227 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 112227 ']' 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.895 20:08:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:32.895 [2024-11-18 20:08:44.852650] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:07:32.896 [2024-11-18 20:08:44.852736] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.154 [2024-11-18 20:08:44.926295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:33.155 [2024-11-18 20:08:44.975776] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:33.155 [2024-11-18 20:08:44.975827] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:33.155 [2024-11-18 20:08:44.975840] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:33.155 [2024-11-18 20:08:44.975852] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:33.155 [2024-11-18 20:08:44.975861] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:33.155 [2024-11-18 20:08:44.977207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.155 [2024-11-18 20:08:44.977236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.155 [2024-11-18 20:08:44.977239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.155 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.155 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:33.155 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:33.155 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:33.155 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:33.155 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:33.155 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:33.155 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.155 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:33.155 [2024-11-18 20:08:45.128027] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.155 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.155 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:33.155 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.155 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:33.413 Malloc0 00:07:33.413 20:08:45 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.413 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:33.413 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.413 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:33.413 Delay0 00:07:33.413 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.413 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:33.413 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.413 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:33.413 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.413 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:33.413 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.413 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:33.413 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.413 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:33.413 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.413 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:33.413 [2024-11-18 20:08:45.195273] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:33.413 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.413 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:33.413 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.413 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:33.413 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.413 20:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:33.413 [2024-11-18 20:08:45.311548] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:35.946 Initializing NVMe Controllers 00:07:35.946 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:35.946 controller IO queue size 128 less than required 00:07:35.946 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:35.946 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:35.946 Initialization complete. Launching workers. 
00:07:35.946 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28635 00:07:35.946 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28700, failed to submit 62 00:07:35.946 success 28639, unsuccessful 61, failed 0 00:07:35.946 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:35.946 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.946 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:35.946 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.946 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:35.946 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:35.946 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:35.946 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:35.946 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:35.946 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:35.946 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:35.946 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:35.946 rmmod nvme_tcp 00:07:35.946 rmmod nvme_fabrics 00:07:35.946 rmmod nvme_keyring 00:07:35.946 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:35.946 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:35.946 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:35.946 20:08:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 112227 ']' 00:07:35.946 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 112227 00:07:35.946 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 112227 ']' 00:07:35.946 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 112227 00:07:35.946 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:35.946 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.946 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112227 00:07:35.946 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:35.946 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:35.946 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112227' 00:07:35.946 killing process with pid 112227 00:07:35.946 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 112227 00:07:35.946 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 112227 00:07:35.946 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:35.947 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:35.947 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:35.947 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:35.947 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:35.947 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:07:35.947 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:35.947 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:35.947 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:35.947 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.947 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:35.947 20:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.860 20:08:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:37.860 00:07:37.860 real 0m7.561s 00:07:37.860 user 0m11.061s 00:07:37.860 sys 0m2.507s 00:07:37.860 20:08:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.860 20:08:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:37.860 ************************************ 00:07:37.860 END TEST nvmf_abort 00:07:37.860 ************************************ 00:07:38.120 20:08:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:38.120 20:08:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:38.120 20:08:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.120 20:08:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:38.120 ************************************ 00:07:38.120 START TEST nvmf_ns_hotplug_stress 00:07:38.120 ************************************ 00:07:38.120 20:08:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:38.120 * Looking for test storage... 00:07:38.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:38.120 20:08:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:38.120 20:08:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:07:38.120 20:08:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:38.120 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:38.120 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.120 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.120 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.120 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.120 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.120 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.120 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.120 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.120 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.120 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.120 
20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.120 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:38.120 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:38.120 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.120 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:38.120 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:38.120 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:38.120 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.120 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:38.120 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.120 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:38.120 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:38.120 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.120 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:38.120 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.120 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.121 20:08:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:38.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.121 --rc genhtml_branch_coverage=1 00:07:38.121 --rc genhtml_function_coverage=1 00:07:38.121 --rc genhtml_legend=1 00:07:38.121 --rc geninfo_all_blocks=1 00:07:38.121 --rc geninfo_unexecuted_blocks=1 00:07:38.121 00:07:38.121 ' 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:38.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.121 --rc genhtml_branch_coverage=1 00:07:38.121 --rc genhtml_function_coverage=1 00:07:38.121 --rc genhtml_legend=1 00:07:38.121 --rc geninfo_all_blocks=1 00:07:38.121 --rc geninfo_unexecuted_blocks=1 00:07:38.121 00:07:38.121 ' 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:38.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.121 --rc genhtml_branch_coverage=1 00:07:38.121 --rc genhtml_function_coverage=1 00:07:38.121 --rc genhtml_legend=1 00:07:38.121 --rc geninfo_all_blocks=1 00:07:38.121 --rc geninfo_unexecuted_blocks=1 00:07:38.121 00:07:38.121 ' 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:38.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.121 --rc genhtml_branch_coverage=1 00:07:38.121 --rc genhtml_function_coverage=1 00:07:38.121 --rc genhtml_legend=1 00:07:38.121 --rc geninfo_all_blocks=1 00:07:38.121 --rc geninfo_unexecuted_blocks=1 00:07:38.121 
00:07:38.121 ' 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:38.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:38.121 20:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:40.664 20:08:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:40.664 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:40.664 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:40.664 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:40.665 20:08:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:40.665 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:40.665 20:08:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:40.665 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:40.665 20:08:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:40.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:40.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:07:40.665 00:07:40.665 --- 10.0.0.2 ping statistics --- 00:07:40.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.665 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:40.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:40.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:07:40.665 00:07:40.665 --- 10.0.0.1 ping statistics --- 00:07:40.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.665 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=114587 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 114587 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 114587 ']' 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.665 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:40.665 [2024-11-18 20:08:52.552282] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:40.665 [2024-11-18 20:08:52.552373] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.665 [2024-11-18 20:08:52.628441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:40.925 [2024-11-18 20:08:52.672779] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:40.925 [2024-11-18 20:08:52.672829] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:40.925 [2024-11-18 20:08:52.672849] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:40.925 [2024-11-18 20:08:52.672860] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:40.925 [2024-11-18 20:08:52.672870] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:40.925 [2024-11-18 20:08:52.674539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.925 [2024-11-18 20:08:52.674595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:40.925 [2024-11-18 20:08:52.674598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.925 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.925 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:40.925 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:40.925 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:40.925 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:40.925 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:40.925 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:40.925 20:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:41.184 [2024-11-18 20:08:53.055200] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.184 20:08:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:41.442 20:08:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:41.700 [2024-11-18 20:08:53.586122] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.700 20:08:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:41.958 20:08:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:42.216 Malloc0 00:07:42.216 20:08:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:42.474 Delay0 00:07:42.474 20:08:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.732 20:08:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:42.990 NULL1 00:07:42.990 20:08:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:43.556 20:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=115009 00:07:43.556 20:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:43.556 20:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115009 00:07:43.556 20:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.491 Read completed with error (sct=0, sc=11) 00:07:44.491 20:08:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.750 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.750 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.750 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.008 20:08:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:45.008 20:08:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:45.008 true 00:07:45.266 20:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115009 00:07:45.266 20:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.832 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:07:45.832 20:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.090 20:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:46.090 20:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:46.348 true 00:07:46.348 20:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115009 00:07:46.348 20:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.606 20:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.172 20:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:47.172 20:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:47.172 true 00:07:47.172 20:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115009 00:07:47.172 20:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.428 20:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.995 20:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:47.995 20:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:47.995 true 00:07:47.995 20:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115009 00:07:47.995 20:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.370 20:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.370 20:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:49.370 20:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:49.628 true 00:07:49.628 20:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115009 00:07:49.628 20:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.886 20:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.144 
20:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:50.144 20:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:50.401 true 00:07:50.401 20:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115009 00:07:50.401 20:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.660 20:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.918 20:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:50.918 20:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:51.176 true 00:07:51.434 20:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115009 00:07:51.434 20:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.368 20:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.626 20:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:52.626 20:09:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:52.885 true 00:07:52.885 20:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115009 00:07:52.885 20:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.143 20:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.402 20:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:53.402 20:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:53.660 true 00:07:53.660 20:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115009 00:07:53.660 20:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.918 20:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.177 20:09:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:54.177 20:09:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:54.435 true 00:07:54.435 20:09:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115009 00:07:54.435 20:09:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.368 20:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.368 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.368 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.625 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.625 20:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:55.625 20:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:55.884 true 00:07:55.884 20:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115009 00:07:55.884 20:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.450 20:09:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.450 20:09:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1012 00:07:56.450 20:09:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:57.016 true 00:07:57.016 20:09:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115009 00:07:57.016 20:09:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.016 20:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.582 20:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:57.582 20:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:57.582 true 00:07:57.582 20:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115009 00:07:57.582 20:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.954 20:09:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.954 20:09:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:58.954 20:09:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:59.212 true 00:07:59.212 20:09:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115009 00:07:59.212 20:09:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.469 20:09:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.727 20:09:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:59.727 20:09:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:59.985 true 00:07:59.985 20:09:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115009 00:07:59.985 20:09:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.918 20:09:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.177 20:09:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:01.177 20:09:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:01.436 true 00:08:01.436 20:09:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115009 00:08:01.436 20:09:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.694 20:09:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.952 20:09:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:01.952 20:09:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:02.210 true 00:08:02.210 20:09:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115009 00:08:02.210 20:09:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.145 20:09:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.145 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:08:03.402 20:09:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:03.402 20:09:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:03.660 true 00:08:03.660 20:09:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115009 00:08:03.660 20:09:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.918 20:09:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.176 20:09:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:04.176 20:09:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:04.433 true 00:08:04.434 20:09:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115009 00:08:04.434 20:09:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.367 20:09:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.367 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:08:05.624 20:09:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:05.624 20:09:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:05.881 true 00:08:05.881 20:09:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115009 00:08:05.881 20:09:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.139 20:09:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.397 20:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:06.397 20:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:06.654 true 00:08:06.654 20:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115009 00:08:06.654 20:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.589 20:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.589 20:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:07.589 20:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:07.847 true 00:08:07.847 20:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115009 00:08:07.847 20:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.105 20:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.363 20:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:08.363 20:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:08.622 true 00:08:08.622 20:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115009 00:08:08.622 20:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.881 20:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.139 20:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:09.139 20:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:09.397 true 00:08:09.397 20:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115009 00:08:09.397 20:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.773 20:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.773 20:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:10.773 20:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:11.031 true 00:08:11.031 20:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115009 00:08:11.031 20:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.289 20:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.547 20:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:11.547 20:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 
00:08:11.805 true 00:08:11.805 20:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115009 00:08:11.805 20:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.063 20:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.321 20:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:12.321 20:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:12.580 true 00:08:12.580 20:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115009 00:08:12.580 20:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.954 Initializing NVMe Controllers 00:08:13.954 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:13.954 Controller IO queue size 128, less than required. 00:08:13.954 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:13.954 Controller IO queue size 128, less than required. 00:08:13.954 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:13.954 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:13.954 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:13.954 Initialization complete. Launching workers.
00:08:13.954 ========================================================
00:08:13.954 Latency(us)
00:08:13.954 Device Information : IOPS MiB/s Average min max
00:08:13.954 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 661.40 0.32 86449.17 3358.70 1012738.29
00:08:13.954 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9412.50 4.60 13598.99 3371.89 533728.52
00:08:13.954 ========================================================
00:08:13.954 Total : 10073.90 4.92 18381.96 3358.70 1012738.29
00:08:13.954
00:08:13.954 20:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:13.954 20:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:08:13.954 20:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:08:14.212 true
00:08:14.212 20:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115009
00:08:14.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (115009) - No such process
00:08:14.212 20:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 115009
00:08:14.212 20:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:14.470 20:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:14.727 20:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:14.727 20:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:08:14.727 20:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:08:14.727 20:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:14.727 20:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:08:14.985 null0
00:08:14.985 20:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:14.985 20:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:14.985 20:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:08:15.243 null1
00:08:15.243 20:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:15.243 20:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:15.243 20:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:08:15.502 null2
00:08:15.502 20:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:15.502 20:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:15.502 20:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:08:15.760 null3
00:08:15.760 20:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:15.760 20:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:15.760 20:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:08:16.018 null4
00:08:16.018 20:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:16.018 20:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:16.018 20:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:08:16.276 null5
00:08:16.277 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:16.277 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:16.277 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:08:16.534 null6
00:08:16.792 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:16.792 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:16.792 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:08:17.051 null7
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 119689 119690 119692 119694 119696 119698 119700 119702
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.051 20:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:17.310 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:17.310 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:17.310 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:17.310 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:17.310 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:17.310 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:17.310 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:17.310 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:17.568 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.568 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.568 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:17.568 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.568 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.568 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:17.568 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.568 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.568 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:17.568 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.568 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.568 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:17.568 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.568 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.569 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.569 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.569 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:17.569 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:17.569 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.569 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.569 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:17.569 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.569 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.569 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:17.826 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:17.826 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:17.826 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:17.826 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:17.826 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:17.826 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:17.826 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:17.826 20:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:18.084 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.084 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.084 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:18.085 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.085 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.085 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:18.085 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.085 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.085 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.085 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.085 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:18.085 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:18.085 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.085 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.085 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:18.085 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.085 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.085 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:18.085 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.085 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.085 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:18.085 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.085 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.085 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:18.343 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:18.343 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:18.343 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:18.343 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:18.343 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:18.343 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:18.343 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:18.343 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:18.910 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.910 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.910 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:18.910 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.910 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.910 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.910 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.910 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:18.910 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:18.910 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.910 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.910 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:18.910 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.910 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.910 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:18.910 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.910 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.910 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:18.910 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.910 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.911 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:18.911 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.911 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.911 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:19.169 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:19.169 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:19.169 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:19.169 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:19.169 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:19.169 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:19.169 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:19.169 20:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:19.428 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.428 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.428 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:19.428 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.428 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.428 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:19.428 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.428 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.428 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:19.428 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.428 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.429 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1
nqn.2016-06.io.spdk:cnode1 null0 00:08:19.429 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.429 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.429 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:19.429 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.429 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.429 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:19.429 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.429 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.429 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:19.429 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.429 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.429 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:19.687 20:09:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:19.687 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:19.687 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:19.687 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:19.688 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.688 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:19.688 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:19.688 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:19.946 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.946 20:09:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.946 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:19.946 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.946 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.946 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:19.946 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.946 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.946 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:19.946 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.946 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.946 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:19.946 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.946 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:08:19.946 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:19.946 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.946 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.946 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:19.946 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.946 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.946 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:19.946 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.946 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.946 20:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:20.205 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:20.205 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:20.205 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:20.205 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:20.205 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:20.205 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:20.205 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.205 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:20.463 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.463 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.463 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:08:20.463 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.463 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.463 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:20.463 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.463 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.463 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:20.463 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.463 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.463 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:20.463 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.463 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.463 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:20.463 20:09:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.463 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.463 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:20.463 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.463 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.463 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:20.463 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.463 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.463 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:21.031 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:21.031 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:21.031 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.031 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:21.031 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:21.031 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:21.031 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:21.031 20:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:21.031 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.031 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.031 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:21.290 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.290 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.290 
20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:21.290 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.290 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.290 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:21.290 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.290 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.290 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:21.290 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.290 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.290 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:21.290 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.290 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.290 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:21.290 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.290 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.290 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:21.290 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.290 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.290 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:21.549 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:21.549 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:21.549 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.549 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:21.549 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:21.549 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:21.549 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:21.549 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:21.808 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.808 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.808 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.808 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.808 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:21.808 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:21.808 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.808 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.808 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:21.808 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.808 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.808 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:21.808 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.808 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.808 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.808 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:21.808 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.808 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:21.808 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.808 20:09:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.808 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:21.808 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.808 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.808 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:22.067 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.067 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:22.067 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:22.067 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:22.067 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:22.067 20:09:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:22.067 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:22.067 20:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:22.326 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.326 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.326 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:22.326 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.326 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.326 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:22.326 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.326 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.326 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:22.326 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.326 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.326 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.326 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:22.326 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.326 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:22.326 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.326 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.326 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:22.326 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.326 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.326 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:22.326 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.326 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.326 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:22.585 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:22.585 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.585 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:22.585 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:22.585 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:22.585 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:22.585 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:22.585 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:22.845 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.845 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.845 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.845 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:23.104 rmmod nvme_tcp 00:08:23.104 rmmod nvme_fabrics 00:08:23.104 rmmod nvme_keyring 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 114587 ']' 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 114587 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' 
-z 114587 ']' 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 114587 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 114587 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 114587' 00:08:23.104 killing process with pid 114587 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 114587 00:08:23.104 20:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 114587 00:08:23.366 20:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:23.366 20:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:23.366 20:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:23.366 20:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:23.366 20:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:08:23.366 20:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:23.366 20:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- 
# iptables-restore 00:08:23.366 20:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:23.366 20:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:23.366 20:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.366 20:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:23.366 20:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.277 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:25.277 00:08:25.277 real 0m47.331s 00:08:25.277 user 3m39.452s 00:08:25.277 sys 0m16.197s 00:08:25.277 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.277 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.277 ************************************ 00:08:25.277 END TEST nvmf_ns_hotplug_stress 00:08:25.277 ************************************ 00:08:25.277 20:09:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:25.277 20:09:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:25.277 20:09:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.277 20:09:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:25.536 ************************************ 00:08:25.536 START TEST nvmf_delete_subsystem 00:08:25.536 ************************************ 00:08:25.536 20:09:37 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:25.536 * Looking for test storage... 00:08:25.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:25.536 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:25.536 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:08:25.536 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:25.536 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:25.536 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:25.536 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:25.536 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:25.536 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:25.537 20:09:37 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:25.537 20:09:37 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:25.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.537 --rc genhtml_branch_coverage=1 00:08:25.537 --rc genhtml_function_coverage=1 00:08:25.537 --rc genhtml_legend=1 00:08:25.537 --rc geninfo_all_blocks=1 00:08:25.537 --rc geninfo_unexecuted_blocks=1 00:08:25.537 00:08:25.537 ' 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:25.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.537 --rc genhtml_branch_coverage=1 00:08:25.537 --rc genhtml_function_coverage=1 00:08:25.537 --rc genhtml_legend=1 00:08:25.537 --rc geninfo_all_blocks=1 00:08:25.537 --rc geninfo_unexecuted_blocks=1 00:08:25.537 00:08:25.537 ' 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:25.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.537 --rc genhtml_branch_coverage=1 00:08:25.537 --rc genhtml_function_coverage=1 00:08:25.537 --rc genhtml_legend=1 00:08:25.537 --rc geninfo_all_blocks=1 00:08:25.537 --rc geninfo_unexecuted_blocks=1 00:08:25.537 00:08:25.537 ' 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:25.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.537 --rc genhtml_branch_coverage=1 00:08:25.537 --rc genhtml_function_coverage=1 00:08:25.537 --rc genhtml_legend=1 00:08:25.537 --rc geninfo_all_blocks=1 00:08:25.537 --rc geninfo_unexecuted_blocks=1 00:08:25.537 00:08:25.537 ' 
00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.537 20:09:37 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:25.537 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.538 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.538 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.538 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:25.538 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:25.538 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:25.538 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:25.538 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:25.538 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:08:25.538 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:25.538 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:25.538 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:25.538 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:25.538 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:25.538 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.538 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:25.538 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.538 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:25.538 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:25.538 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:25.538 20:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:28.076 20:09:39 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:28.076 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:28.076 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:28.076 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:0a:00.1: cvl_0_1' 00:08:28.076 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:28.076 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:28.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:28.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:08:28.077 00:08:28.077 --- 10.0.0.2 ping statistics --- 00:08:28.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.077 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:28.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:28.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:08:28.077 00:08:28.077 --- 10.0.0.1 ping statistics --- 00:08:28.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.077 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:28.077 20:09:39 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=122594 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 122594 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 122594 ']' 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.077 20:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:28.077 [2024-11-18 20:09:39.883106] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:08:28.077 [2024-11-18 20:09:39.883198] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.077 [2024-11-18 20:09:39.954373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:28.077 [2024-11-18 20:09:39.998039] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.077 [2024-11-18 20:09:39.998097] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.077 [2024-11-18 20:09:39.998125] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.077 [2024-11-18 20:09:39.998136] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.077 [2024-11-18 20:09:39.998146] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
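At this point the harness has started `nvmf_tgt` inside the `cvl_0_0_ns_spdk` namespace and is blocked in `waitforlisten`, polling until the target's RPC socket at `/var/tmp/spdk.sock` comes up. A minimal sketch of that poll-with-retry-budget loop, runnable without SPDK (a temp file stands in for the real UNIX socket, and a delayed `touch` stands in for `nvmf_tgt` creating it; all names here are illustrative, not the harness's exact code):

```shell
# Placeholder for the daemon: create the "socket" after a short startup delay.
rpc_sock=$(mktemp -u)
( sleep 0.3; touch "$rpc_sock" ) &   # stands in for: nvmf_tgt -i 0 -m 0x3 &

# Poll until the socket path exists, giving up after max_retries attempts.
max_retries=100
i=0
until [ -e "$rpc_sock" ]; do
    if [ "$i" -ge "$max_retries" ]; then
        echo "timed out waiting for $rpc_sock" >&2
        exit 1
    fi
    i=$((i + 1))
    sleep 0.1
done
echo "listening on $rpc_sock"
rm -f "$rpc_sock"
```

The real `waitforlisten` additionally verifies the pid is still alive between polls, so a crashed target fails fast instead of burning the whole retry budget.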
00:08:28.077 [2024-11-18 20:09:39.999559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.077 [2024-11-18 20:09:39.999565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.336 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.336 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:28.336 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:28.336 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:28.336 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:28.336 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.336 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:28.336 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.336 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:28.336 [2024-11-18 20:09:40.147786] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.336 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.336 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:28.336 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.336 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:08:28.336 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.336 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:28.336 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.336 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:28.336 [2024-11-18 20:09:40.164019] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.336 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.336 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:28.337 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.337 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:28.337 NULL1 00:08:28.337 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.337 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:28.337 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.337 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:28.337 Delay0 00:08:28.337 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.337 20:09:40 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.337 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.337 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:28.337 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.337 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=122616 00:08:28.337 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:28.337 20:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:28.337 [2024-11-18 20:09:40.248886] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
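The test's core move is visible here: launch `spdk_nvme_perf` in the background (capturing `perf_pid`), sleep, delete the subsystem out from under the in-flight I/O, then poll with `kill -0` until the perf process exits, with `(( delay++ > 30 ))` as the retry budget. A self-contained sketch of that wait pattern (a short `sleep` stands in for `spdk_nvme_perf`, and the RPC call is elided; structure mirrors the `kill -0` loop in the log, not the script verbatim):

```shell
# Background workload; placeholder for:
#   spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 ...' -t 5 ... &
sleep 1 &
perf_pid=$!

# rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # (elided)

# Poll every 0.5s until the workload exits; give up after ~15s.
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    if [ "$delay" -gt 30 ]; then
        echo "process $perf_pid did not exit in time" >&2
        exit 1
    fi
    delay=$((delay + 1))
    sleep 0.5
done
echo "perf process exited"
```

`kill -0` sends no signal; it only checks whether the pid still exists, which is why the log later shows `kill: (122616) - No such process` once perf has torn down after the subsystem deletion.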
00:08:30.238 20:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:30.238 20:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.238 20:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:30.496 Read completed with error (sct=0, sc=8) 00:08:30.496 Write completed with error (sct=0, sc=8) 00:08:30.496 Read completed with error (sct=0, sc=8) 00:08:30.496 starting I/O failed: -6 00:08:30.496 Read completed with error (sct=0, sc=8) 00:08:30.496 Read completed with error (sct=0, sc=8) 00:08:30.496 Read completed with error (sct=0, sc=8) 00:08:30.496 Read completed with error (sct=0, sc=8) 00:08:30.496 starting I/O failed: -6 00:08:30.496 Read completed with error (sct=0, sc=8) 00:08:30.496 Read completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 starting I/O failed: -6 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 starting I/O failed: -6 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 starting I/O failed: -6 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 starting I/O failed: -6 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error 
(sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 starting I/O failed: -6 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 starting I/O failed: -6 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 starting I/O failed: -6 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 starting I/O failed: -6 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 starting I/O failed: -6 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 [2024-11-18 20:09:42.379870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f08c400d4b0 is same with the state(6) to be set 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 starting I/O failed: -6 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 starting I/O failed: -6 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 
00:08:30.497 starting I/O failed: -6 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 starting I/O failed: -6 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 starting I/O failed: -6 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 starting I/O failed: -6 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 starting I/O failed: -6 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 starting I/O failed: -6 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 starting I/O failed: -6 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 [2024-11-18 20:09:42.381304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff5e70 is same with the state(6) to be set 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Read 
completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error 
(sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.497 Read completed with error (sct=0, sc=8) 00:08:30.497 Write completed with error (sct=0, sc=8) 00:08:30.498 Write completed with error (sct=0, sc=8) 00:08:30.498 Read completed with error (sct=0, sc=8) 00:08:30.498 Read completed with error (sct=0, sc=8) 00:08:30.498 Read completed with error (sct=0, sc=8) 00:08:30.498 Write completed with error (sct=0, sc=8) 00:08:30.498 Write completed with error (sct=0, sc=8) 00:08:30.498 
Read completed with error (sct=0, sc=8) 00:08:30.498 Read completed with error (sct=0, sc=8) 00:08:30.498 Read completed with error (sct=0, sc=8) 00:08:30.498 Read completed with error (sct=0, sc=8) 00:08:30.498 Read completed with error (sct=0, sc=8) 00:08:30.498 Read completed with error (sct=0, sc=8) 00:08:30.498 Write completed with error (sct=0, sc=8) 00:08:30.498 Read completed with error (sct=0, sc=8) 00:08:30.498 Read completed with error (sct=0, sc=8) 00:08:30.498 Read completed with error (sct=0, sc=8) 00:08:30.498 Read completed with error (sct=0, sc=8) 00:08:30.498 Write completed with error (sct=0, sc=8) 00:08:30.498 Write completed with error (sct=0, sc=8) 00:08:30.498 Read completed with error (sct=0, sc=8) 00:08:30.498 Write completed with error (sct=0, sc=8) 00:08:30.498 Read completed with error (sct=0, sc=8) 00:08:30.498 Read completed with error (sct=0, sc=8) 00:08:30.498 Write completed with error (sct=0, sc=8) 00:08:30.498 Read completed with error (sct=0, sc=8) 00:08:31.433 [2024-11-18 20:09:43.343739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10035b0 is same with the state(6) to be set 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Write completed with error (sct=0, sc=8) 00:08:31.433 Write completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Write completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Write completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with 
error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Write completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Write completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 [2024-11-18 20:09:43.382717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff5b40 is same with the state(6) to be set 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Write completed with error (sct=0, sc=8) 00:08:31.433 Write completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Write completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 [2024-11-18 20:09:43.384421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f08c400d7e0 is same with the state(6) to be set 00:08:31.433 Read completed with error 
(sct=0, sc=8) 00:08:31.433 Write completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Write completed with error (sct=0, sc=8) 00:08:31.433 Write completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Write completed with error (sct=0, sc=8) 00:08:31.433 Write completed with error (sct=0, sc=8) 00:08:31.433 Write completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Write completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Write completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Write completed with error (sct=0, sc=8) 00:08:31.433 [2024-11-18 20:09:43.384781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f08c4000c40 is same with the state(6) to be set 00:08:31.433 Write completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Write completed with error (sct=0, sc=8) 00:08:31.433 Write completed with error (sct=0, sc=8) 00:08:31.433 Write completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Write completed with error (sct=0, sc=8) 00:08:31.433 Write completed with error (sct=0, sc=8) 00:08:31.433 Write completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, 
sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Write completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Write completed with error (sct=0, sc=8) 00:08:31.433 Write completed with error (sct=0, sc=8) 00:08:31.433 Write completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 Read completed with error (sct=0, sc=8) 00:08:31.433 [2024-11-18 20:09:43.384944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f08c400d020 is same with the state(6) to be set 00:08:31.433 Initializing NVMe Controllers 00:08:31.433 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:31.433 Controller IO queue size 128, less than required. 00:08:31.434 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:31.434 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:31.434 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:31.434 Initialization complete. Launching workers. 
00:08:31.434 ======================================================== 00:08:31.434 Latency(us) 00:08:31.434 Device Information : IOPS MiB/s Average min max 00:08:31.434 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 153.75 0.08 906585.61 331.02 2001187.04 00:08:31.434 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 167.15 0.08 970603.81 2377.85 1013206.23 00:08:31.434 ======================================================== 00:08:31.434 Total : 320.90 0.16 939930.48 331.02 2001187.04 00:08:31.434 00:08:31.434 [2024-11-18 20:09:43.385777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10035b0 (9): Bad file descriptor 00:08:31.434 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:31.434 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.434 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:31.434 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 122616 00:08:31.434 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:32.000 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:32.000 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 122616 00:08:32.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (122616) - No such process 00:08:32.000 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 122616 00:08:32.000 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:32.000 20:09:43 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 122616 00:08:32.000 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:32.000 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.000 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:32.000 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.000 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 122616 00:08:32.000 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:32.000 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:32.000 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:32.000 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:32.000 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:32.000 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.000 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:32.000 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.000 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:32.000 
20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.000 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:32.000 [2024-11-18 20:09:43.907764] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.000 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.000 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.000 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.000 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:32.000 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.000 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=123029 00:08:32.000 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:32.000 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123029 00:08:32.000 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:32.000 20:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:32.000 [2024-11-18 20:09:43.972028] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:32.566 20:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:32.566 20:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123029 00:08:32.566 20:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:33.133 20:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:33.133 20:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123029 00:08:33.133 20:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:33.698 20:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:33.699 20:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123029 00:08:33.699 20:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:33.956 20:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:33.957 20:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123029 00:08:33.957 20:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:34.522 20:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:34.522 20:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123029 00:08:34.522 20:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:35.088 20:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:35.088 20:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123029 00:08:35.088 20:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:35.347 Initializing NVMe Controllers 00:08:35.347 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:35.347 Controller IO queue size 128, less than required. 00:08:35.347 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:35.347 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:35.347 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:35.347 Initialization complete. Launching workers. 00:08:35.347 ======================================================== 00:08:35.347 Latency(us) 00:08:35.347 Device Information : IOPS MiB/s Average min max 00:08:35.347 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004436.16 1000199.43 1041988.62 00:08:35.347 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004297.36 1000217.45 1013226.08 00:08:35.347 ======================================================== 00:08:35.347 Total : 256.00 0.12 1004366.76 1000199.43 1041988.62 00:08:35.347 00:08:35.604 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:35.604 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123029 00:08:35.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (123029) - No such process 00:08:35.604 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 123029 00:08:35.604 20:09:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:35.604 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:35.604 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:35.604 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:35.604 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:35.604 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:35.604 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:35.604 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:35.604 rmmod nvme_tcp 00:08:35.604 rmmod nvme_fabrics 00:08:35.604 rmmod nvme_keyring 00:08:35.604 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:35.604 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:35.604 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:35.604 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 122594 ']' 00:08:35.604 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 122594 00:08:35.604 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 122594 ']' 00:08:35.604 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 122594 00:08:35.604 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:35.604 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.604 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 122594 00:08:35.604 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:35.604 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:35.604 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 122594' 00:08:35.604 killing process with pid 122594 00:08:35.604 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 122594 00:08:35.604 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 122594 00:08:35.862 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:35.862 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:35.862 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:35.863 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:35.863 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:35.863 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:35.863 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:35.863 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:35.863 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:35.863 20:09:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.863 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.863 20:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:38.407 00:08:38.407 real 0m12.501s 00:08:38.407 user 0m27.775s 00:08:38.407 sys 0m3.217s 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:38.407 ************************************ 00:08:38.407 END TEST nvmf_delete_subsystem 00:08:38.407 ************************************ 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:38.407 ************************************ 00:08:38.407 START TEST nvmf_host_management 00:08:38.407 ************************************ 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:38.407 * Looking for test storage... 
00:08:38.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:38.407 20:09:49 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:38.407 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.407 20:09:49 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:38.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.407 --rc genhtml_branch_coverage=1 00:08:38.408 --rc genhtml_function_coverage=1 00:08:38.408 --rc genhtml_legend=1 00:08:38.408 --rc geninfo_all_blocks=1 00:08:38.408 --rc geninfo_unexecuted_blocks=1 00:08:38.408 00:08:38.408 ' 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:38.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.408 --rc genhtml_branch_coverage=1 00:08:38.408 --rc genhtml_function_coverage=1 00:08:38.408 --rc genhtml_legend=1 00:08:38.408 --rc geninfo_all_blocks=1 00:08:38.408 --rc geninfo_unexecuted_blocks=1 00:08:38.408 00:08:38.408 ' 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:38.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.408 --rc genhtml_branch_coverage=1 00:08:38.408 --rc genhtml_function_coverage=1 00:08:38.408 --rc genhtml_legend=1 00:08:38.408 --rc geninfo_all_blocks=1 00:08:38.408 --rc geninfo_unexecuted_blocks=1 00:08:38.408 00:08:38.408 ' 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:38.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.408 --rc genhtml_branch_coverage=1 00:08:38.408 --rc genhtml_function_coverage=1 00:08:38.408 --rc genhtml_legend=1 00:08:38.408 --rc geninfo_all_blocks=1 00:08:38.408 --rc geninfo_unexecuted_blocks=1 00:08:38.408 00:08:38.408 ' 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:38.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.408 20:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.408 20:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:38.408 20:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:38.408 20:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:38.408 20:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:40.319 20:09:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:40.319 20:09:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:40.319 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:40.319 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:40.319 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:40.319 20:09:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:40.320 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:40.320 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:40.320 20:09:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:40.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:40.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:08:40.320 00:08:40.320 --- 10.0.0.2 ping statistics --- 00:08:40.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.320 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:40.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:40.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:08:40.320 00:08:40.320 --- 10.0.0.1 ping statistics --- 00:08:40.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.320 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=125502 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 125502 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 125502 ']' 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:40.320 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.580 [2024-11-18 20:09:52.336350] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:08:40.580 [2024-11-18 20:09:52.336442] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.580 [2024-11-18 20:09:52.409017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:40.580 [2024-11-18 20:09:52.453371] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.580 [2024-11-18 20:09:52.453428] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.580 [2024-11-18 20:09:52.453456] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.580 [2024-11-18 20:09:52.453467] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.580 [2024-11-18 20:09:52.453477] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:40.580 [2024-11-18 20:09:52.455103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.580 [2024-11-18 20:09:52.455230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.580 [2024-11-18 20:09:52.455340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:40.580 [2024-11-18 20:09:52.455348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.580 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.580 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:40.580 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:40.580 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:40.580 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.838 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.838 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:40.838 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.838 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.838 [2024-11-18 20:09:52.597599] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.838 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.838 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:40.838 20:09:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:40.838 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.838 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:40.838 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:40.838 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:40.838 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.838 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.838 Malloc0 00:08:40.838 [2024-11-18 20:09:52.675470] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.838 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.838 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:40.838 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:40.838 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.838 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=125549 00:08:40.839 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 125549 /var/tmp/bdevperf.sock 00:08:40.839 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 125549 ']' 00:08:40.839 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:40.839 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:40.839 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:40.839 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:40.839 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:40.839 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:40.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:40.839 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:40.839 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:40.839 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:40.839 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.839 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:40.839 { 00:08:40.839 "params": { 00:08:40.839 "name": "Nvme$subsystem", 00:08:40.839 "trtype": "$TEST_TRANSPORT", 00:08:40.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:40.839 "adrfam": "ipv4", 00:08:40.839 "trsvcid": "$NVMF_PORT", 00:08:40.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:40.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:40.839 "hdgst": ${hdgst:-false}, 
00:08:40.839 "ddgst": ${ddgst:-false} 00:08:40.839 }, 00:08:40.839 "method": "bdev_nvme_attach_controller" 00:08:40.839 } 00:08:40.839 EOF 00:08:40.839 )") 00:08:40.839 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:40.839 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:40.839 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:40.839 20:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:40.839 "params": { 00:08:40.839 "name": "Nvme0", 00:08:40.839 "trtype": "tcp", 00:08:40.839 "traddr": "10.0.0.2", 00:08:40.839 "adrfam": "ipv4", 00:08:40.839 "trsvcid": "4420", 00:08:40.839 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:40.839 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:40.839 "hdgst": false, 00:08:40.839 "ddgst": false 00:08:40.839 }, 00:08:40.839 "method": "bdev_nvme_attach_controller" 00:08:40.839 }' 00:08:40.839 [2024-11-18 20:09:52.757826] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:08:40.839 [2024-11-18 20:09:52.757903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125549 ] 00:08:40.839 [2024-11-18 20:09:52.826588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.097 [2024-11-18 20:09:52.874788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.097 Running I/O for 10 seconds... 
00:08:41.355 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:41.355 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:41.355 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:41.355 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.355 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.355 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.355 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:41.355 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:41.355 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:41.355 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:41.355 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:41.355 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:41.355 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:41.355 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:41.356 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:41.356 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:41.356 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.356 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.356 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.356 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:41.356 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:41.356 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:41.616 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:41.616 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:41.616 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:41.616 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:41.616 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.616 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.616 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.616 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=559 00:08:41.616 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 559 -ge 100 ']' 00:08:41.616 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:41.616 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:41.616 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:41.616 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:41.616 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.616 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.616 [2024-11-18 20:09:53.486262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8e260 is same with the state(6) to be set 00:08:41.616 [2024-11-18 20:09:53.486405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8e260 is same with the state(6) to be set 00:08:41.616 [2024-11-18 20:09:53.489296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:41.616 [2024-11-18 20:09:53.489341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.616 [2024-11-18 20:09:53.489360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:41.616 [2024-11-18 20:09:53.489375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.616 [2024-11-18 20:09:53.489389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:08:41.616 [2024-11-18 20:09:53.489403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.616 [2024-11-18 20:09:53.489417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:41.616 [2024-11-18 20:09:53.489430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.616 [2024-11-18 20:09:53.489446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1bd70 is same with the state(6) to be set 00:08:41.616 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.616 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:41.616 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.616 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.616 [2024-11-18 20:09:53.495514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.616 [2024-11-18 20:09:53.495540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.616 [2024-11-18 20:09:53.495586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.616 [2024-11-18 20:09:53.495602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.616 [2024-11-18 
20:09:53.495618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:41.616 [2024-11-18 20:09:53.495632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 60 analogous WRITE / ABORTED - SQ DELETION (00/08) record pairs elided: cid:3-62, lba:82304-89856 in steps of 128, len:128, qid:1, all aborted by submission queue deletion ...]
00:08:41.618 [2024-11-18 20:09:53.497438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:41.618 [2024-11-18 20:09:53.497451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0
00:08:41.618 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:41.618 20:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:08:41.618 [2024-11-18 20:09:53.498646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:08:41.618 task offset: 81920 on job bdev=Nvme0n1 fails
00:08:41.618
00:08:41.618 Latency(us)
00:08:41.618 [2024-11-18T19:09:53.626Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:41.618 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:41.618 Job: Nvme0n1 ended in about 0.41 seconds with error
00:08:41.618 Verification LBA range: start 0x0 length 0x400
00:08:41.618 Nvme0n1 : 0.41 1573.41 98.34 157.34 0.00 35919.07 2378.71 35340.89
00:08:41.618 [2024-11-18T19:09:53.626Z] ===================================================================================================================
00:08:41.618 [2024-11-18T19:09:53.626Z] Total : 1573.41 98.34 157.34 0.00 35919.07 2378.71 35340.89
00:08:41.618 [2024-11-18 20:09:53.500517] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:41.618 [2024-11-18 20:09:53.500565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc1bd70 (9): Bad file descriptor
00:08:41.618 [2024-11-18 20:09:53.550888] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:08:42.551 20:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 125549
00:08:42.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (125549) - No such process
00:08:42.551 20:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:08:42.551 20:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:08:42.551 20:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:08:42.551 20:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:08:42.551 20:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:08:42.551 20:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:08:42.551 20:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:08:42.551 20:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:08:42.551 {
00:08:42.551 "params": {
00:08:42.551 "name": "Nvme$subsystem",
00:08:42.551 "trtype": "$TEST_TRANSPORT",
00:08:42.551 "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:42.551 "adrfam": "ipv4",
00:08:42.551 "trsvcid": "$NVMF_PORT",
00:08:42.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:42.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:42.551 "hdgst": ${hdgst:-false},
00:08:42.551 "ddgst": ${ddgst:-false}
00:08:42.551 },
00:08:42.551 "method": "bdev_nvme_attach_controller"
00:08:42.551 }
00:08:42.551 EOF
00:08:42.551 )")
00:08:42.551 20:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:08:42.551 20:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:08:42.551 20:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:08:42.551 20:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:08:42.551 "params": {
00:08:42.551 "name": "Nvme0",
00:08:42.551 "trtype": "tcp",
00:08:42.551 "traddr": "10.0.0.2",
00:08:42.551 "adrfam": "ipv4",
00:08:42.551 "trsvcid": "4420",
00:08:42.551 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:08:42.551 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:08:42.551 "hdgst": false,
00:08:42.551 "ddgst": false
00:08:42.551 },
00:08:42.551 "method": "bdev_nvme_attach_controller"
00:08:42.551 }'
[2024-11-18 20:09:54.547818] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization...
[2024-11-18 20:09:54.547899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125831 ]
[2024-11-18 20:09:54.617121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-18 20:09:54.666025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 1 seconds...
00:08:44.003 1664.00 IOPS, 104.00 MiB/s
00:08:44.004
00:08:44.004 Latency(us)
00:08:44.004 [2024-11-18T19:09:56.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:44.004 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:44.004 Verification LBA range: start 0x0 length 0x400
00:08:44.004 Nvme0n1 : 1.02 1698.82 106.18 0.00 0.00 37058.50 6990.51 33981.63
00:08:44.004 [2024-11-18T19:09:56.012Z] ===================================================================================================================
00:08:44.004 [2024-11-18T19:09:56.012Z] Total : 1698.82 106.18 0.00 0.00 37058.50 6990.51 33981.63
00:08:44.262 20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:44.262 20:09:56 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:44.262 rmmod nvme_tcp 00:08:44.262 rmmod nvme_fabrics 00:08:44.262 rmmod nvme_keyring 00:08:44.262 20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:44.262 20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:44.262 20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:44.262 20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 125502 ']' 00:08:44.262 20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 125502 00:08:44.262 20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 125502 ']' 00:08:44.262 20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 125502 00:08:44.262 20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:44.262 20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.262 20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 125502 00:08:44.262 20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:44.262 20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:44.262 20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 125502' 00:08:44.262 killing process with pid 125502 00:08:44.262 20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 125502 00:08:44.262 20:09:56 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 125502 00:08:44.523 [2024-11-18 20:09:56.396121] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:44.523 20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:44.523 20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:44.523 20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:44.523 20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:44.523 20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:44.523 20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:44.523 20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:44.523 20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:44.523 20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:44.523 20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.523 20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.523 20:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:47.067 00:08:47.067 real 0m8.638s 00:08:47.067 user 0m18.859s 
00:08:47.067 sys 0m2.772s 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:47.067 ************************************ 00:08:47.067 END TEST nvmf_host_management 00:08:47.067 ************************************ 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:47.067 ************************************ 00:08:47.067 START TEST nvmf_lvol 00:08:47.067 ************************************ 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:47.067 * Looking for test storage... 
00:08:47.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.067 20:09:58 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:47.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.067 --rc genhtml_branch_coverage=1 00:08:47.067 --rc genhtml_function_coverage=1 00:08:47.067 --rc genhtml_legend=1 00:08:47.067 --rc geninfo_all_blocks=1 00:08:47.067 --rc geninfo_unexecuted_blocks=1 
00:08:47.067 00:08:47.067 ' 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:47.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.067 --rc genhtml_branch_coverage=1 00:08:47.067 --rc genhtml_function_coverage=1 00:08:47.067 --rc genhtml_legend=1 00:08:47.067 --rc geninfo_all_blocks=1 00:08:47.067 --rc geninfo_unexecuted_blocks=1 00:08:47.067 00:08:47.067 ' 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:47.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.067 --rc genhtml_branch_coverage=1 00:08:47.067 --rc genhtml_function_coverage=1 00:08:47.067 --rc genhtml_legend=1 00:08:47.067 --rc geninfo_all_blocks=1 00:08:47.067 --rc geninfo_unexecuted_blocks=1 00:08:47.067 00:08:47.067 ' 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:47.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.067 --rc genhtml_branch_coverage=1 00:08:47.067 --rc genhtml_function_coverage=1 00:08:47.067 --rc genhtml_legend=1 00:08:47.067 --rc geninfo_all_blocks=1 00:08:47.067 --rc geninfo_unexecuted_blocks=1 00:08:47.067 00:08:47.067 ' 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.067 20:09:58 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.067 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:47.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:47.068 20:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:48.999 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:48.999 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:48.999 
20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:48.999 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:48.999 20:10:00 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:48.999 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:48.999 20:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:49.258 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:49.258 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:49.258 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:49.258 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:49.258 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:49.258 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:08:49.258 00:08:49.258 --- 10.0.0.2 ping statistics --- 00:08:49.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.258 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:08:49.258 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:49.258 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:49.258 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:08:49.258 00:08:49.258 --- 10.0.0.1 ping statistics --- 00:08:49.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.259 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:08:49.259 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.259 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:49.259 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:49.259 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.259 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:49.259 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:49.259 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.259 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:49.259 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:49.259 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:49.259 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:49.259 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:08:49.259 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:49.259 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=127933 00:08:49.259 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 127933 00:08:49.259 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:49.259 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 127933 ']' 00:08:49.259 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.259 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.259 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.259 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.259 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:49.259 [2024-11-18 20:10:01.117097] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:08:49.259 [2024-11-18 20:10:01.117181] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.259 [2024-11-18 20:10:01.188225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:49.259 [2024-11-18 20:10:01.232048] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.259 [2024-11-18 20:10:01.232101] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:49.259 [2024-11-18 20:10:01.232128] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:49.259 [2024-11-18 20:10:01.232139] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:49.259 [2024-11-18 20:10:01.232149] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:49.259 [2024-11-18 20:10:01.233735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.259 [2024-11-18 20:10:01.233763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.259 [2024-11-18 20:10:01.233767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.517 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.517 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:49.517 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:49.517 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:49.517 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:49.517 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.517 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:49.775 [2024-11-18 20:10:01.611816] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.775 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:50.034 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:50.034 20:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:50.292 20:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:50.292 20:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:50.550 20:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:50.808 20:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a619aafd-ff58-4ab6-9a30-8f5d4d11f191 00:08:50.808 20:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a619aafd-ff58-4ab6-9a30-8f5d4d11f191 lvol 20 00:08:51.066 20:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=90ce06fd-85ff-4de6-ab58-c50c6917339f 00:08:51.066 20:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:51.325 20:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 90ce06fd-85ff-4de6-ab58-c50c6917339f 00:08:51.583 20:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:51.842 [2024-11-18 20:10:03.838975] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:52.100 20:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:52.358 20:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=128368 00:08:52.358 20:10:04 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:52.358 20:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:53.295 20:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 90ce06fd-85ff-4de6-ab58-c50c6917339f MY_SNAPSHOT 00:08:53.553 20:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=8f80592c-aa59-43d1-ab50-b4809eb877b4 00:08:53.553 20:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 90ce06fd-85ff-4de6-ab58-c50c6917339f 30 00:08:53.812 20:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 8f80592c-aa59-43d1-ab50-b4809eb877b4 MY_CLONE 00:08:54.381 20:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1e84bfea-340f-461d-8274-947ed98bd3c6 00:08:54.381 20:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1e84bfea-340f-461d-8274-947ed98bd3c6 00:08:54.956 20:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 128368 00:09:03.074 Initializing NVMe Controllers 00:09:03.074 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:03.074 Controller IO queue size 128, less than required. 00:09:03.074 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:09:03.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:03.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:03.074 Initialization complete. Launching workers. 00:09:03.074 ======================================================== 00:09:03.074 Latency(us) 00:09:03.074 Device Information : IOPS MiB/s Average min max 00:09:03.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10544.98 41.19 12150.05 677.84 73644.87 00:09:03.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10432.08 40.75 12278.64 2422.13 66879.75 00:09:03.074 ======================================================== 00:09:03.074 Total : 20977.06 81.94 12214.00 677.84 73644.87 00:09:03.074 00:09:03.074 20:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:03.074 20:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 90ce06fd-85ff-4de6-ab58-c50c6917339f 00:09:03.333 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a619aafd-ff58-4ab6-9a30-8f5d4d11f191 00:09:03.592 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:03.592 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:03.592 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:03.592 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:03.592 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:03.592 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:03.592 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:03.592 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:03.592 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:03.592 rmmod nvme_tcp 00:09:03.592 rmmod nvme_fabrics 00:09:03.592 rmmod nvme_keyring 00:09:03.592 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:03.592 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:03.592 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:03.592 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 127933 ']' 00:09:03.592 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 127933 00:09:03.592 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 127933 ']' 00:09:03.592 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 127933 00:09:03.592 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:03.592 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:03.592 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 127933 00:09:03.592 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:03.592 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:03.592 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 127933' 00:09:03.592 killing process with pid 127933 00:09:03.592 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- common/autotest_common.sh@973 -- # kill 127933 00:09:03.592 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 127933 00:09:03.853 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:03.853 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:03.853 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:03.853 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:03.853 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:09:03.853 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:09:03.853 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:03.853 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:03.853 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:03.853 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.853 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.853 20:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.763 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:05.763 00:09:05.763 real 0m19.233s 00:09:05.763 user 1m5.146s 00:09:05.763 sys 0m5.747s 00:09:05.763 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.763 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:05.763 ************************************ 00:09:05.763 END TEST nvmf_lvol 00:09:05.763 
************************************ 00:09:06.022 20:10:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:06.023 ************************************ 00:09:06.023 START TEST nvmf_lvs_grow 00:09:06.023 ************************************ 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:06.023 * Looking for test storage... 00:09:06.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@336 -- # read -ra ver1 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:06.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.023 --rc genhtml_branch_coverage=1 00:09:06.023 --rc genhtml_function_coverage=1 00:09:06.023 --rc genhtml_legend=1 00:09:06.023 --rc geninfo_all_blocks=1 00:09:06.023 --rc geninfo_unexecuted_blocks=1 00:09:06.023 00:09:06.023 ' 
00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:06.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.023 --rc genhtml_branch_coverage=1 00:09:06.023 --rc genhtml_function_coverage=1 00:09:06.023 --rc genhtml_legend=1 00:09:06.023 --rc geninfo_all_blocks=1 00:09:06.023 --rc geninfo_unexecuted_blocks=1 00:09:06.023 00:09:06.023 ' 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:06.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.023 --rc genhtml_branch_coverage=1 00:09:06.023 --rc genhtml_function_coverage=1 00:09:06.023 --rc genhtml_legend=1 00:09:06.023 --rc geninfo_all_blocks=1 00:09:06.023 --rc geninfo_unexecuted_blocks=1 00:09:06.023 00:09:06.023 ' 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:06.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.023 --rc genhtml_branch_coverage=1 00:09:06.023 --rc genhtml_function_coverage=1 00:09:06.023 --rc genhtml_legend=1 00:09:06.023 --rc geninfo_all_blocks=1 00:09:06.023 --rc geninfo_unexecuted_blocks=1 00:09:06.023 00:09:06.023 ' 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.023 20:10:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.023 
20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.023 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.023 20:10:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.024 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:06.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:06.024 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:06.024 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:06.024 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:06.024 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:06.024 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:06.024 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:06.024 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:06.024 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.024 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:06.024 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:06.024 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:06.024 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.024 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.024 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.024 
20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:06.024 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:06.024 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:06.024 20:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:08.559 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.559 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:08.559 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:08.559 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:08.559 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:08.559 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:08.559 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:08.559 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:08.559 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:08.559 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:08.559 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:08.559 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:08.559 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:08.559 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:08.559 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:09:08.559 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.559 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.559 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.559 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.559 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.559 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.559 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.559 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:08.559 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.559 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.559 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.559 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:08.560 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:08.560 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:08.560 
20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:08.560 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:08.560 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:08.560 20:10:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:08.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:09:08.560 00:09:08.560 --- 10.0.0.2 ping statistics --- 00:09:08.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.560 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:08.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:09:08.560 00:09:08.560 --- 10.0.0.1 ping statistics --- 00:09:08.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.560 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:08.560 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:08.561 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=131729 00:09:08.561 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:08.561 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 131729 00:09:08.561 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 131729 ']' 00:09:08.561 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.561 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.561 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.561 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.561 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:08.561 [2024-11-18 20:10:20.371759] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:09:08.561 [2024-11-18 20:10:20.371846] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.561 [2024-11-18 20:10:20.444976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.561 [2024-11-18 20:10:20.489818] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.561 [2024-11-18 20:10:20.489881] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:08.561 [2024-11-18 20:10:20.489915] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.561 [2024-11-18 20:10:20.489927] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:08.561 [2024-11-18 20:10:20.489939] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:08.561 [2024-11-18 20:10:20.490512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.819 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.819 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:08.819 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:08.819 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:08.819 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:08.819 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.819 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:09.077 [2024-11-18 20:10:20.891212] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.077 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:09.077 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:09.077 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.077 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:09.077 ************************************ 00:09:09.077 START TEST lvs_grow_clean 00:09:09.077 ************************************ 00:09:09.077 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:09.077 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:09:09.077 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:09.077 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:09.077 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:09.077 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:09.078 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:09.078 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:09.078 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:09.078 20:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:09.336 20:10:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:09.336 20:10:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:09.595 20:10:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=26d76a2f-f13b-49cd-a62e-054a262f2286 00:09:09.595 20:10:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26d76a2f-f13b-49cd-a62e-054a262f2286 00:09:09.595 20:10:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:09.854 20:10:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:09.854 20:10:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:09.854 20:10:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 26d76a2f-f13b-49cd-a62e-054a262f2286 lvol 150 00:09:10.111 20:10:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c0825deb-2211-41d1-b781-d389196773ee 00:09:10.111 20:10:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:10.111 20:10:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:10.369 [2024-11-18 20:10:22.296975] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:10.369 [2024-11-18 20:10:22.297046] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:10.369 true 00:09:10.369 20:10:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26d76a2f-f13b-49cd-a62e-054a262f2286 00:09:10.370 20:10:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:10.628 20:10:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:10.628 20:10:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:10.886 20:10:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c0825deb-2211-41d1-b781-d389196773ee 00:09:11.454 20:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:11.454 [2024-11-18 20:10:23.428463] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.454 20:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:11.712 20:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=132139 00:09:11.712 20:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:11.712 20:10:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:11.712 20:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 132139 /var/tmp/bdevperf.sock 00:09:11.712 20:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 132139 ']' 00:09:11.712 20:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:11.712 20:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.712 20:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:11.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:11.712 20:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.712 20:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:11.971 [2024-11-18 20:10:23.755294] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:09:11.971 [2024-11-18 20:10:23.755375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132139 ] 00:09:11.971 [2024-11-18 20:10:23.822357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.971 [2024-11-18 20:10:23.868766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.229 20:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.229 20:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:12.229 20:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:12.486 Nvme0n1 00:09:12.486 20:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:12.745 [ 00:09:12.745 { 00:09:12.745 "name": "Nvme0n1", 00:09:12.745 "aliases": [ 00:09:12.745 "c0825deb-2211-41d1-b781-d389196773ee" 00:09:12.745 ], 00:09:12.745 "product_name": "NVMe disk", 00:09:12.745 "block_size": 4096, 00:09:12.745 "num_blocks": 38912, 00:09:12.745 "uuid": "c0825deb-2211-41d1-b781-d389196773ee", 00:09:12.745 "numa_id": 0, 00:09:12.745 "assigned_rate_limits": { 00:09:12.745 "rw_ios_per_sec": 0, 00:09:12.745 "rw_mbytes_per_sec": 0, 00:09:12.745 "r_mbytes_per_sec": 0, 00:09:12.745 "w_mbytes_per_sec": 0 00:09:12.745 }, 00:09:12.745 "claimed": false, 00:09:12.745 "zoned": false, 00:09:12.745 "supported_io_types": { 00:09:12.745 "read": true, 
00:09:12.745 "write": true, 00:09:12.745 "unmap": true, 00:09:12.745 "flush": true, 00:09:12.745 "reset": true, 00:09:12.745 "nvme_admin": true, 00:09:12.745 "nvme_io": true, 00:09:12.745 "nvme_io_md": false, 00:09:12.745 "write_zeroes": true, 00:09:12.745 "zcopy": false, 00:09:12.745 "get_zone_info": false, 00:09:12.745 "zone_management": false, 00:09:12.745 "zone_append": false, 00:09:12.745 "compare": true, 00:09:12.745 "compare_and_write": true, 00:09:12.745 "abort": true, 00:09:12.745 "seek_hole": false, 00:09:12.745 "seek_data": false, 00:09:12.745 "copy": true, 00:09:12.745 "nvme_iov_md": false 00:09:12.745 }, 00:09:12.745 "memory_domains": [ 00:09:12.745 { 00:09:12.745 "dma_device_id": "system", 00:09:12.745 "dma_device_type": 1 00:09:12.745 } 00:09:12.745 ], 00:09:12.745 "driver_specific": { 00:09:12.745 "nvme": [ 00:09:12.745 { 00:09:12.745 "trid": { 00:09:12.745 "trtype": "TCP", 00:09:12.745 "adrfam": "IPv4", 00:09:12.745 "traddr": "10.0.0.2", 00:09:12.745 "trsvcid": "4420", 00:09:12.745 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:12.745 }, 00:09:12.745 "ctrlr_data": { 00:09:12.745 "cntlid": 1, 00:09:12.745 "vendor_id": "0x8086", 00:09:12.745 "model_number": "SPDK bdev Controller", 00:09:12.745 "serial_number": "SPDK0", 00:09:12.745 "firmware_revision": "25.01", 00:09:12.745 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:12.745 "oacs": { 00:09:12.745 "security": 0, 00:09:12.745 "format": 0, 00:09:12.745 "firmware": 0, 00:09:12.745 "ns_manage": 0 00:09:12.745 }, 00:09:12.745 "multi_ctrlr": true, 00:09:12.745 "ana_reporting": false 00:09:12.745 }, 00:09:12.745 "vs": { 00:09:12.745 "nvme_version": "1.3" 00:09:12.745 }, 00:09:12.745 "ns_data": { 00:09:12.745 "id": 1, 00:09:12.745 "can_share": true 00:09:12.745 } 00:09:12.745 } 00:09:12.745 ], 00:09:12.745 "mp_policy": "active_passive" 00:09:12.745 } 00:09:12.745 } 00:09:12.745 ] 00:09:12.745 20:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=132229 
00:09:12.745 20:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:12.745 20:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:12.745 Running I/O for 10 seconds... 00:09:14.120 Latency(us) 00:09:14.120 [2024-11-18T19:10:26.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.120 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.120 Nvme0n1 : 1.00 14798.00 57.80 0.00 0.00 0.00 0.00 0.00 00:09:14.120 [2024-11-18T19:10:26.128Z] =================================================================================================================== 00:09:14.120 [2024-11-18T19:10:26.128Z] Total : 14798.00 57.80 0.00 0.00 0.00 0.00 0.00 00:09:14.120 00:09:14.686 20:10:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 26d76a2f-f13b-49cd-a62e-054a262f2286 00:09:14.944 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.944 Nvme0n1 : 2.00 15019.00 58.67 0.00 0.00 0.00 0.00 0.00 00:09:14.944 [2024-11-18T19:10:26.952Z] =================================================================================================================== 00:09:14.944 [2024-11-18T19:10:26.952Z] Total : 15019.00 58.67 0.00 0.00 0.00 0.00 0.00 00:09:14.944 00:09:14.944 true 00:09:14.945 20:10:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26d76a2f-f13b-49cd-a62e-054a262f2286 00:09:14.945 20:10:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:09:15.203 20:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:15.203 20:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:15.203 20:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 132229 00:09:15.769 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.769 Nvme0n1 : 3.00 15135.00 59.12 0.00 0.00 0.00 0.00 0.00 00:09:15.769 [2024-11-18T19:10:27.777Z] =================================================================================================================== 00:09:15.769 [2024-11-18T19:10:27.777Z] Total : 15135.00 59.12 0.00 0.00 0.00 0.00 0.00 00:09:15.769 00:09:16.706 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.706 Nvme0n1 : 4.00 15224.75 59.47 0.00 0.00 0.00 0.00 0.00 00:09:16.706 [2024-11-18T19:10:28.714Z] =================================================================================================================== 00:09:16.706 [2024-11-18T19:10:28.714Z] Total : 15224.75 59.47 0.00 0.00 0.00 0.00 0.00 00:09:16.706 00:09:18.080 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.080 Nvme0n1 : 5.00 15278.60 59.68 0.00 0.00 0.00 0.00 0.00 00:09:18.080 [2024-11-18T19:10:30.088Z] =================================================================================================================== 00:09:18.080 [2024-11-18T19:10:30.088Z] Total : 15278.60 59.68 0.00 0.00 0.00 0.00 0.00 00:09:18.080 00:09:19.014 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.014 Nvme0n1 : 6.00 15314.50 59.82 0.00 0.00 0.00 0.00 0.00 00:09:19.014 [2024-11-18T19:10:31.022Z] =================================================================================================================== 00:09:19.014 
[2024-11-18T19:10:31.022Z] Total : 15314.50 59.82 0.00 0.00 0.00 0.00 0.00 00:09:19.014 00:09:19.951 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.951 Nvme0n1 : 7.00 15358.29 59.99 0.00 0.00 0.00 0.00 0.00 00:09:19.951 [2024-11-18T19:10:31.959Z] =================================================================================================================== 00:09:19.951 [2024-11-18T19:10:31.959Z] Total : 15358.29 59.99 0.00 0.00 0.00 0.00 0.00 00:09:19.951 00:09:20.884 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.884 Nvme0n1 : 8.00 15407.12 60.18 0.00 0.00 0.00 0.00 0.00 00:09:20.884 [2024-11-18T19:10:32.892Z] =================================================================================================================== 00:09:20.884 [2024-11-18T19:10:32.892Z] Total : 15407.12 60.18 0.00 0.00 0.00 0.00 0.00 00:09:20.884 00:09:21.819 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.819 Nvme0n1 : 9.00 15445.33 60.33 0.00 0.00 0.00 0.00 0.00 00:09:21.819 [2024-11-18T19:10:33.827Z] =================================================================================================================== 00:09:21.819 [2024-11-18T19:10:33.827Z] Total : 15445.33 60.33 0.00 0.00 0.00 0.00 0.00 00:09:21.819 00:09:22.753 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.753 Nvme0n1 : 10.00 15463.20 60.40 0.00 0.00 0.00 0.00 0.00 00:09:22.753 [2024-11-18T19:10:34.761Z] =================================================================================================================== 00:09:22.753 [2024-11-18T19:10:34.761Z] Total : 15463.20 60.40 0.00 0.00 0.00 0.00 0.00 00:09:22.753 00:09:22.753 00:09:22.753 Latency(us) 00:09:22.753 [2024-11-18T19:10:34.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.753 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:22.753 Nvme0n1 : 10.00 15468.86 60.43 0.00 0.00 8270.08 2111.72 16796.63 00:09:22.753 [2024-11-18T19:10:34.761Z] =================================================================================================================== 00:09:22.753 [2024-11-18T19:10:34.761Z] Total : 15468.86 60.43 0.00 0.00 8270.08 2111.72 16796.63 00:09:22.753 { 00:09:22.753 "results": [ 00:09:22.753 { 00:09:22.753 "job": "Nvme0n1", 00:09:22.753 "core_mask": "0x2", 00:09:22.753 "workload": "randwrite", 00:09:22.753 "status": "finished", 00:09:22.753 "queue_depth": 128, 00:09:22.753 "io_size": 4096, 00:09:22.753 "runtime": 10.004617, 00:09:22.753 "iops": 15468.858028248358, 00:09:22.753 "mibps": 60.42522667284515, 00:09:22.753 "io_failed": 0, 00:09:22.753 "io_timeout": 0, 00:09:22.753 "avg_latency_us": 8270.081649272948, 00:09:22.753 "min_latency_us": 2111.7155555555555, 00:09:22.753 "max_latency_us": 16796.634074074074 00:09:22.753 } 00:09:22.753 ], 00:09:22.753 "core_count": 1 00:09:22.753 } 00:09:22.753 20:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 132139 00:09:22.753 20:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 132139 ']' 00:09:22.753 20:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 132139 00:09:22.753 20:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:22.753 20:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:22.753 20:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 132139 00:09:23.012 20:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:23.012 20:10:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:23.012 20:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 132139' 00:09:23.012 killing process with pid 132139 00:09:23.012 20:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 132139 00:09:23.012 Received shutdown signal, test time was about 10.000000 seconds 00:09:23.012 00:09:23.012 Latency(us) 00:09:23.012 [2024-11-18T19:10:35.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.012 [2024-11-18T19:10:35.020Z] =================================================================================================================== 00:09:23.012 [2024-11-18T19:10:35.020Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:23.012 20:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 132139 00:09:23.012 20:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:23.269 20:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:23.527 20:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26d76a2f-f13b-49cd-a62e-054a262f2286 00:09:23.527 20:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:24.092 20:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:09:24.092 20:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:24.092 20:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:24.092 [2024-11-18 20:10:36.069516] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:24.092 20:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26d76a2f-f13b-49cd-a62e-054a262f2286 00:09:24.092 20:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:24.092 20:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26d76a2f-f13b-49cd-a62e-054a262f2286 00:09:24.092 20:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:24.350 20:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:24.350 20:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:24.350 20:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:24.350 20:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:24.350 20:10:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:24.350 20:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:24.350 20:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:24.350 20:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26d76a2f-f13b-49cd-a62e-054a262f2286 00:09:24.608 request: 00:09:24.608 { 00:09:24.608 "uuid": "26d76a2f-f13b-49cd-a62e-054a262f2286", 00:09:24.608 "method": "bdev_lvol_get_lvstores", 00:09:24.608 "req_id": 1 00:09:24.608 } 00:09:24.608 Got JSON-RPC error response 00:09:24.608 response: 00:09:24.608 { 00:09:24.608 "code": -19, 00:09:24.608 "message": "No such device" 00:09:24.608 } 00:09:24.608 20:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:24.608 20:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:24.608 20:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:24.608 20:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:24.608 20:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:24.866 aio_bdev 00:09:24.866 20:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev c0825deb-2211-41d1-b781-d389196773ee 00:09:24.866 20:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=c0825deb-2211-41d1-b781-d389196773ee 00:09:24.866 20:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:24.866 20:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:24.866 20:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:24.866 20:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:24.866 20:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:25.123 20:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c0825deb-2211-41d1-b781-d389196773ee -t 2000 00:09:25.381 [ 00:09:25.381 { 00:09:25.381 "name": "c0825deb-2211-41d1-b781-d389196773ee", 00:09:25.381 "aliases": [ 00:09:25.381 "lvs/lvol" 00:09:25.381 ], 00:09:25.381 "product_name": "Logical Volume", 00:09:25.381 "block_size": 4096, 00:09:25.381 "num_blocks": 38912, 00:09:25.381 "uuid": "c0825deb-2211-41d1-b781-d389196773ee", 00:09:25.381 "assigned_rate_limits": { 00:09:25.381 "rw_ios_per_sec": 0, 00:09:25.381 "rw_mbytes_per_sec": 0, 00:09:25.381 "r_mbytes_per_sec": 0, 00:09:25.381 "w_mbytes_per_sec": 0 00:09:25.381 }, 00:09:25.381 "claimed": false, 00:09:25.381 "zoned": false, 00:09:25.381 "supported_io_types": { 00:09:25.381 "read": true, 00:09:25.381 "write": true, 00:09:25.381 "unmap": true, 00:09:25.381 "flush": false, 00:09:25.381 "reset": true, 00:09:25.381 
"nvme_admin": false, 00:09:25.381 "nvme_io": false, 00:09:25.381 "nvme_io_md": false, 00:09:25.381 "write_zeroes": true, 00:09:25.381 "zcopy": false, 00:09:25.381 "get_zone_info": false, 00:09:25.381 "zone_management": false, 00:09:25.381 "zone_append": false, 00:09:25.381 "compare": false, 00:09:25.381 "compare_and_write": false, 00:09:25.381 "abort": false, 00:09:25.381 "seek_hole": true, 00:09:25.381 "seek_data": true, 00:09:25.381 "copy": false, 00:09:25.381 "nvme_iov_md": false 00:09:25.381 }, 00:09:25.381 "driver_specific": { 00:09:25.381 "lvol": { 00:09:25.381 "lvol_store_uuid": "26d76a2f-f13b-49cd-a62e-054a262f2286", 00:09:25.381 "base_bdev": "aio_bdev", 00:09:25.381 "thin_provision": false, 00:09:25.381 "num_allocated_clusters": 38, 00:09:25.381 "snapshot": false, 00:09:25.381 "clone": false, 00:09:25.381 "esnap_clone": false 00:09:25.381 } 00:09:25.381 } 00:09:25.381 } 00:09:25.381 ] 00:09:25.381 20:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:25.381 20:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26d76a2f-f13b-49cd-a62e-054a262f2286 00:09:25.381 20:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:25.639 20:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:25.640 20:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26d76a2f-f13b-49cd-a62e-054a262f2286 00:09:25.640 20:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:25.897 20:10:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:25.897 20:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c0825deb-2211-41d1-b781-d389196773ee 00:09:26.155 20:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 26d76a2f-f13b-49cd-a62e-054a262f2286 00:09:26.413 20:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:26.671 20:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:26.671 00:09:26.671 real 0m17.639s 00:09:26.671 user 0m17.194s 00:09:26.671 sys 0m1.818s 00:09:26.671 20:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.671 20:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:26.671 ************************************ 00:09:26.671 END TEST lvs_grow_clean 00:09:26.671 ************************************ 00:09:26.671 20:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:26.671 20:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:26.671 20:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.671 20:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:26.671 ************************************ 
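The cluster accounting that lvs_grow_clean just verified (total_data_clusters == 99, free_clusters == 61) follows from the sizes the test uses: a 400 MiB AIO file with a 4 MiB cluster size yields 99 data clusters, and the 150 MiB lvol rounds up to 38 whole clusters (38912 blocks of 4096 bytes), leaving 61 free. A bash sketch of that arithmetic; the one-cluster metadata overhead is an assumption inferred from the logged values (200 MiB → 49, 400 MiB → 99), not taken from SPDK internals:

```shell
# Reproduce the cluster counts seen in the log from the test's sizes.
# Assumption (inferred from the logged values, not from SPDK source):
# the lvstore consumes one cluster of metadata.
cluster_mb=4
md_clusters=1                       # assumed metadata overhead

data_clusters() {                   # total_data_clusters for an aio file
    local file_mb=$1
    echo $(( file_mb / cluster_mb - md_clusters ))
}

lvol_clusters() {                   # lvols round up to whole clusters
    local lvol_mb=$1
    echo $(( (lvol_mb + cluster_mb - 1) / cluster_mb ))
}

echo "200M file: $(data_clusters 200) data clusters"        # 49, as at sh@29
echo "400M file: $(data_clusters 400) data clusters"        # 99, as at sh@61
lvol=$(lvol_clusters 150)                                   # 38 allocated clusters
echo "150M lvol: $lvol clusters, $(( lvol * cluster_mb * 1024 * 1024 / 4096 )) blocks"
echo "free after grow: $(( $(data_clusters 400) - lvol ))"  # 61, as at sh@70
```

The same arithmetic explains the earlier clean-path checks: before the grow, 200 MiB gives 49 data clusters, of which the 38-cluster lvol leaves 11 free.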
00:09:26.671 START TEST lvs_grow_dirty 00:09:26.671 ************************************ 00:09:26.671 20:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:26.671 20:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:26.671 20:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:26.671 20:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:26.671 20:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:26.671 20:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:26.671 20:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:26.671 20:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:26.671 20:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:26.671 20:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:26.928 20:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:26.928 20:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:27.492 20:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=95ce5c54-cdf0-4ddd-bfa2-5d8fa1499963 00:09:27.492 20:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95ce5c54-cdf0-4ddd-bfa2-5d8fa1499963 00:09:27.492 20:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:27.492 20:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:27.492 20:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:27.492 20:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 95ce5c54-cdf0-4ddd-bfa2-5d8fa1499963 lvol 150 00:09:27.749 20:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e22141b0-8561-468c-829a-5f8e775e1192 00:09:27.749 20:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:27.749 20:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:28.006 [2024-11-18 20:10:39.987041] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:09:28.006 [2024-11-18 20:10:39.987134] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:28.006 true 00:09:28.006 20:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95ce5c54-cdf0-4ddd-bfa2-5d8fa1499963 00:09:28.006 20:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:28.570 20:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:28.570 20:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:28.570 20:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e22141b0-8561-468c-829a-5f8e775e1192 00:09:29.138 20:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:29.138 [2024-11-18 20:10:41.110477] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:29.138 20:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:29.396 20:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=134283 00:09:29.396 20:10:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:29.396 20:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 134283 /var/tmp/bdevperf.sock 00:09:29.396 20:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 134283 ']' 00:09:29.396 20:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:29.396 20:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:29.396 20:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.396 20:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:29.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:29.396 20:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.396 20:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:29.655 [2024-11-18 20:10:41.444376] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:09:29.655 [2024-11-18 20:10:41.444452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134283 ] 00:09:29.655 [2024-11-18 20:10:41.509799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.655 [2024-11-18 20:10:41.559060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.914 20:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.914 20:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:29.914 20:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:30.172 Nvme0n1 00:09:30.172 20:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:30.431 [ 00:09:30.431 { 00:09:30.431 "name": "Nvme0n1", 00:09:30.431 "aliases": [ 00:09:30.431 "e22141b0-8561-468c-829a-5f8e775e1192" 00:09:30.431 ], 00:09:30.431 "product_name": "NVMe disk", 00:09:30.431 "block_size": 4096, 00:09:30.431 "num_blocks": 38912, 00:09:30.431 "uuid": "e22141b0-8561-468c-829a-5f8e775e1192", 00:09:30.431 "numa_id": 0, 00:09:30.431 "assigned_rate_limits": { 00:09:30.431 "rw_ios_per_sec": 0, 00:09:30.431 "rw_mbytes_per_sec": 0, 00:09:30.431 "r_mbytes_per_sec": 0, 00:09:30.431 "w_mbytes_per_sec": 0 00:09:30.431 }, 00:09:30.431 "claimed": false, 00:09:30.431 "zoned": false, 00:09:30.431 "supported_io_types": { 00:09:30.431 "read": true, 
00:09:30.431 "write": true, 00:09:30.431 "unmap": true, 00:09:30.431 "flush": true, 00:09:30.431 "reset": true, 00:09:30.431 "nvme_admin": true, 00:09:30.431 "nvme_io": true, 00:09:30.431 "nvme_io_md": false, 00:09:30.431 "write_zeroes": true, 00:09:30.431 "zcopy": false, 00:09:30.431 "get_zone_info": false, 00:09:30.431 "zone_management": false, 00:09:30.431 "zone_append": false, 00:09:30.431 "compare": true, 00:09:30.431 "compare_and_write": true, 00:09:30.431 "abort": true, 00:09:30.431 "seek_hole": false, 00:09:30.431 "seek_data": false, 00:09:30.431 "copy": true, 00:09:30.431 "nvme_iov_md": false 00:09:30.431 }, 00:09:30.431 "memory_domains": [ 00:09:30.431 { 00:09:30.431 "dma_device_id": "system", 00:09:30.431 "dma_device_type": 1 00:09:30.431 } 00:09:30.431 ], 00:09:30.431 "driver_specific": { 00:09:30.431 "nvme": [ 00:09:30.431 { 00:09:30.431 "trid": { 00:09:30.431 "trtype": "TCP", 00:09:30.432 "adrfam": "IPv4", 00:09:30.432 "traddr": "10.0.0.2", 00:09:30.432 "trsvcid": "4420", 00:09:30.432 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:30.432 }, 00:09:30.432 "ctrlr_data": { 00:09:30.432 "cntlid": 1, 00:09:30.432 "vendor_id": "0x8086", 00:09:30.432 "model_number": "SPDK bdev Controller", 00:09:30.432 "serial_number": "SPDK0", 00:09:30.432 "firmware_revision": "25.01", 00:09:30.432 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:30.432 "oacs": { 00:09:30.432 "security": 0, 00:09:30.432 "format": 0, 00:09:30.432 "firmware": 0, 00:09:30.432 "ns_manage": 0 00:09:30.432 }, 00:09:30.432 "multi_ctrlr": true, 00:09:30.432 "ana_reporting": false 00:09:30.432 }, 00:09:30.432 "vs": { 00:09:30.432 "nvme_version": "1.3" 00:09:30.432 }, 00:09:30.432 "ns_data": { 00:09:30.432 "id": 1, 00:09:30.432 "can_share": true 00:09:30.432 } 00:09:30.432 } 00:09:30.432 ], 00:09:30.432 "mp_policy": "active_passive" 00:09:30.432 } 00:09:30.432 } 00:09:30.432 ] 00:09:30.432 20:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=134417 
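At nvmf_lvs_grow.sh@55-57 the test launches the bdevperf workload asynchronously, records its pid in run_test_pid, and only reaps it at sh@65 after growing the lvstore, so the grow happens while I/O is in flight. A sketch of that background-run pattern; the stub function is a hypothetical stand-in for `bdevperf.py -s /var/tmp/bdevperf.sock perform_tests`:

```shell
# Background-run pattern from nvmf_lvs_grow.sh@55-57 and sh@65: start the
# workload, grow the lvstore concurrently, then reap the workload.
perform_tests_stub() { sleep 0.2; }   # hypothetical stand-in for the
                                      # bdevperf.py perform_tests invocation
perform_tests_stub &
run_test_pid=$!

# ... bdev_lvol_grow_lvstore would run here, concurrently with the I/O ...

wait "$run_test_pid"                  # sh@65: wait $run_test_pid
echo "workload finished with status $?"
```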
00:09:30.432 20:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:30.432 20:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:30.432 Running I/O for 10 seconds... 00:09:31.810 Latency(us) 00:09:31.810 [2024-11-18T19:10:43.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:31.810 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.810 Nvme0n1 : 1.00 14736.00 57.56 0.00 0.00 0.00 0.00 0.00 00:09:31.810 [2024-11-18T19:10:43.818Z] =================================================================================================================== 00:09:31.810 [2024-11-18T19:10:43.818Z] Total : 14736.00 57.56 0.00 0.00 0.00 0.00 0.00 00:09:31.810 00:09:32.379 20:10:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 95ce5c54-cdf0-4ddd-bfa2-5d8fa1499963 00:09:32.638 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.638 Nvme0n1 : 2.00 15022.50 58.68 0.00 0.00 0.00 0.00 0.00 00:09:32.638 [2024-11-18T19:10:44.646Z] =================================================================================================================== 00:09:32.638 [2024-11-18T19:10:44.646Z] Total : 15022.50 58.68 0.00 0.00 0.00 0.00 0.00 00:09:32.638 00:09:32.638 true 00:09:32.638 20:10:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95ce5c54-cdf0-4ddd-bfa2-5d8fa1499963 00:09:32.638 20:10:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:09:32.897 20:10:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:32.897 20:10:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:32.897 20:10:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 134417 00:09:33.466 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.466 Nvme0n1 : 3.00 15160.00 59.22 0.00 0.00 0.00 0.00 0.00 00:09:33.466 [2024-11-18T19:10:45.474Z] =================================================================================================================== 00:09:33.466 [2024-11-18T19:10:45.474Z] Total : 15160.00 59.22 0.00 0.00 0.00 0.00 0.00 00:09:33.466 00:09:34.402 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.402 Nvme0n1 : 4.00 15245.00 59.55 0.00 0.00 0.00 0.00 0.00 00:09:34.402 [2024-11-18T19:10:46.410Z] =================================================================================================================== 00:09:34.402 [2024-11-18T19:10:46.410Z] Total : 15245.00 59.55 0.00 0.00 0.00 0.00 0.00 00:09:34.402 00:09:35.779 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.779 Nvme0n1 : 5.00 15296.60 59.75 0.00 0.00 0.00 0.00 0.00 00:09:35.779 [2024-11-18T19:10:47.787Z] =================================================================================================================== 00:09:35.779 [2024-11-18T19:10:47.787Z] Total : 15296.60 59.75 0.00 0.00 0.00 0.00 0.00 00:09:35.779 00:09:36.713 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.713 Nvme0n1 : 6.00 15351.17 59.97 0.00 0.00 0.00 0.00 0.00 00:09:36.713 [2024-11-18T19:10:48.721Z] =================================================================================================================== 00:09:36.713 
[2024-11-18T19:10:48.721Z] Total : 15351.17 59.97 0.00 0.00 0.00 0.00 0.00 00:09:36.713 00:09:37.656 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.656 Nvme0n1 : 7.00 15410.86 60.20 0.00 0.00 0.00 0.00 0.00 00:09:37.656 [2024-11-18T19:10:49.664Z] =================================================================================================================== 00:09:37.656 [2024-11-18T19:10:49.664Z] Total : 15410.86 60.20 0.00 0.00 0.00 0.00 0.00 00:09:37.656 00:09:38.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.591 Nvme0n1 : 8.00 15437.50 60.30 0.00 0.00 0.00 0.00 0.00 00:09:38.591 [2024-11-18T19:10:50.599Z] =================================================================================================================== 00:09:38.591 [2024-11-18T19:10:50.599Z] Total : 15437.50 60.30 0.00 0.00 0.00 0.00 0.00 00:09:38.591 00:09:39.526 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.526 Nvme0n1 : 9.00 15455.89 60.37 0.00 0.00 0.00 0.00 0.00 00:09:39.526 [2024-11-18T19:10:51.534Z] =================================================================================================================== 00:09:39.526 [2024-11-18T19:10:51.534Z] Total : 15455.89 60.37 0.00 0.00 0.00 0.00 0.00 00:09:39.526 00:09:40.461 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.461 Nvme0n1 : 10.00 15486.90 60.50 0.00 0.00 0.00 0.00 0.00 00:09:40.461 [2024-11-18T19:10:52.469Z] =================================================================================================================== 00:09:40.461 [2024-11-18T19:10:52.469Z] Total : 15486.90 60.50 0.00 0.00 0.00 0.00 0.00 00:09:40.461 00:09:40.461 00:09:40.461 Latency(us) 00:09:40.461 [2024-11-18T19:10:52.469Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.461 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:40.461 Nvme0n1 : 10.00 15493.83 60.52 0.00 0.00 8256.58 2293.76 19612.25 00:09:40.461 [2024-11-18T19:10:52.469Z] =================================================================================================================== 00:09:40.461 [2024-11-18T19:10:52.469Z] Total : 15493.83 60.52 0.00 0.00 8256.58 2293.76 19612.25 00:09:40.461 { 00:09:40.461 "results": [ 00:09:40.461 { 00:09:40.461 "job": "Nvme0n1", 00:09:40.461 "core_mask": "0x2", 00:09:40.461 "workload": "randwrite", 00:09:40.461 "status": "finished", 00:09:40.461 "queue_depth": 128, 00:09:40.461 "io_size": 4096, 00:09:40.461 "runtime": 10.003786, 00:09:40.461 "iops": 15493.834034434563, 00:09:40.461 "mibps": 60.52278919701001, 00:09:40.461 "io_failed": 0, 00:09:40.461 "io_timeout": 0, 00:09:40.461 "avg_latency_us": 8256.583767762291, 00:09:40.461 "min_latency_us": 2293.76, 00:09:40.461 "max_latency_us": 19612.254814814816 00:09:40.461 } 00:09:40.461 ], 00:09:40.461 "core_count": 1 00:09:40.461 } 00:09:40.461 20:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 134283 00:09:40.461 20:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 134283 ']' 00:09:40.461 20:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 134283 00:09:40.461 20:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:40.461 20:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.461 20:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 134283 00:09:40.461 20:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:40.461 20:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:40.461 20:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 134283' 00:09:40.461 killing process with pid 134283 00:09:40.461 20:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 134283 00:09:40.461 Received shutdown signal, test time was about 10.000000 seconds 00:09:40.461 00:09:40.461 Latency(us) 00:09:40.461 [2024-11-18T19:10:52.469Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.461 [2024-11-18T19:10:52.469Z] =================================================================================================================== 00:09:40.461 [2024-11-18T19:10:52.469Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:40.461 20:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 134283 00:09:40.719 20:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:40.975 20:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:41.232 20:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95ce5c54-cdf0-4ddd-bfa2-5d8fa1499963 00:09:41.232 20:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:41.490 20:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:41.749 20:10:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:41.749 20:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 131729 00:09:41.749 20:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 131729 00:09:41.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 131729 Killed "${NVMF_APP[@]}" "$@" 00:09:41.749 20:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:41.749 20:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:41.749 20:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:41.749 20:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:41.749 20:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:41.750 20:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=135753 00:09:41.750 20:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:41.750 20:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 135753 00:09:41.750 20:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 135753 ']' 00:09:41.750 20:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.750 20:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.750 20:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.750 20:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.750 20:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:41.750 [2024-11-18 20:10:53.583574] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:09:41.750 [2024-11-18 20:10:53.583676] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.750 [2024-11-18 20:10:53.654764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.750 [2024-11-18 20:10:53.700866] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.750 [2024-11-18 20:10:53.700932] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:41.750 [2024-11-18 20:10:53.700946] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.750 [2024-11-18 20:10:53.700957] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:41.750 [2024-11-18 20:10:53.700966] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
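After the SIGKILL of the first nvmf app, waitforlisten (autotest_common.sh@839-844) polls until the restarted nvmf_tgt answers on /var/tmp/spdk.sock, bounded by max_retries=100. A minimal sketch of that style of retry loop, polling a generic predicate rather than the real RPC-socket check; the sleep interval and the example predicate are illustrative assumptions:

```shell
# Minimal sketch of a waitforlisten-style poll loop: retry a readiness
# check up to max_retries times before giving up.
wait_for() {
    local max_retries=100           # as in autotest_common.sh@840
    local i
    for (( i = 0; i < max_retries; i++ )); do
        "$@" && return 0            # predicate succeeded: target is ready
        sleep 0.1
    done
    return 1                        # exhausted retries
}

# Illustrative use: wait until a marker file appears.
marker=$(mktemp -u)
( sleep 0.3; touch "$marker" ) &
wait_for test -e "$marker" && echo "ready"
rm -f "$marker"
```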
00:09:41.750 [2024-11-18 20:10:53.701558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.008 20:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:42.008 20:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:42.008 20:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:42.008 20:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:42.008 20:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:42.008 20:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:42.008 20:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:42.267 [2024-11-18 20:10:54.094007] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:42.267 [2024-11-18 20:10:54.094138] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:42.267 [2024-11-18 20:10:54.094188] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:42.267 20:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:42.267 20:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e22141b0-8561-468c-829a-5f8e775e1192 00:09:42.267 20:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e22141b0-8561-468c-829a-5f8e775e1192 
00:09:42.267 20:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:42.267 20:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:42.267 20:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:42.267 20:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:42.267 20:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:42.527 20:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e22141b0-8561-468c-829a-5f8e775e1192 -t 2000 00:09:42.785 [ 00:09:42.785 { 00:09:42.785 "name": "e22141b0-8561-468c-829a-5f8e775e1192", 00:09:42.785 "aliases": [ 00:09:42.785 "lvs/lvol" 00:09:42.785 ], 00:09:42.785 "product_name": "Logical Volume", 00:09:42.785 "block_size": 4096, 00:09:42.785 "num_blocks": 38912, 00:09:42.785 "uuid": "e22141b0-8561-468c-829a-5f8e775e1192", 00:09:42.785 "assigned_rate_limits": { 00:09:42.785 "rw_ios_per_sec": 0, 00:09:42.785 "rw_mbytes_per_sec": 0, 00:09:42.785 "r_mbytes_per_sec": 0, 00:09:42.785 "w_mbytes_per_sec": 0 00:09:42.785 }, 00:09:42.785 "claimed": false, 00:09:42.785 "zoned": false, 00:09:42.785 "supported_io_types": { 00:09:42.785 "read": true, 00:09:42.785 "write": true, 00:09:42.785 "unmap": true, 00:09:42.785 "flush": false, 00:09:42.785 "reset": true, 00:09:42.785 "nvme_admin": false, 00:09:42.785 "nvme_io": false, 00:09:42.785 "nvme_io_md": false, 00:09:42.785 "write_zeroes": true, 00:09:42.785 "zcopy": false, 00:09:42.785 "get_zone_info": false, 00:09:42.785 "zone_management": false, 00:09:42.785 "zone_append": 
false, 00:09:42.785 "compare": false, 00:09:42.786 "compare_and_write": false, 00:09:42.786 "abort": false, 00:09:42.786 "seek_hole": true, 00:09:42.786 "seek_data": true, 00:09:42.786 "copy": false, 00:09:42.786 "nvme_iov_md": false 00:09:42.786 }, 00:09:42.786 "driver_specific": { 00:09:42.786 "lvol": { 00:09:42.786 "lvol_store_uuid": "95ce5c54-cdf0-4ddd-bfa2-5d8fa1499963", 00:09:42.786 "base_bdev": "aio_bdev", 00:09:42.786 "thin_provision": false, 00:09:42.786 "num_allocated_clusters": 38, 00:09:42.786 "snapshot": false, 00:09:42.786 "clone": false, 00:09:42.786 "esnap_clone": false 00:09:42.786 } 00:09:42.786 } 00:09:42.786 } 00:09:42.786 ] 00:09:42.786 20:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:42.786 20:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95ce5c54-cdf0-4ddd-bfa2-5d8fa1499963 00:09:42.786 20:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:43.045 20:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:43.045 20:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95ce5c54-cdf0-4ddd-bfa2-5d8fa1499963 00:09:43.045 20:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:43.304 20:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:43.304 20:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:43.564 [2024-11-18 20:10:55.460008] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:43.564 20:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95ce5c54-cdf0-4ddd-bfa2-5d8fa1499963 00:09:43.564 20:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:43.564 20:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95ce5c54-cdf0-4ddd-bfa2-5d8fa1499963 00:09:43.564 20:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.564 20:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:43.564 20:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.564 20:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:43.564 20:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.564 20:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:43.564 20:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.564 20:10:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:43.564 20:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95ce5c54-cdf0-4ddd-bfa2-5d8fa1499963 00:09:43.822 request: 00:09:43.822 { 00:09:43.822 "uuid": "95ce5c54-cdf0-4ddd-bfa2-5d8fa1499963", 00:09:43.822 "method": "bdev_lvol_get_lvstores", 00:09:43.822 "req_id": 1 00:09:43.822 } 00:09:43.822 Got JSON-RPC error response 00:09:43.822 response: 00:09:43.822 { 00:09:43.822 "code": -19, 00:09:43.822 "message": "No such device" 00:09:43.822 } 00:09:43.822 20:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:43.822 20:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:43.822 20:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:43.822 20:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:43.822 20:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:44.080 aio_bdev 00:09:44.080 20:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e22141b0-8561-468c-829a-5f8e775e1192 00:09:44.080 20:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e22141b0-8561-468c-829a-5f8e775e1192 00:09:44.080 20:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:44.080 20:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:44.080 20:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:44.080 20:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:44.080 20:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:44.338 20:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e22141b0-8561-468c-829a-5f8e775e1192 -t 2000 00:09:44.597 [ 00:09:44.597 { 00:09:44.597 "name": "e22141b0-8561-468c-829a-5f8e775e1192", 00:09:44.597 "aliases": [ 00:09:44.597 "lvs/lvol" 00:09:44.597 ], 00:09:44.597 "product_name": "Logical Volume", 00:09:44.597 "block_size": 4096, 00:09:44.597 "num_blocks": 38912, 00:09:44.597 "uuid": "e22141b0-8561-468c-829a-5f8e775e1192", 00:09:44.597 "assigned_rate_limits": { 00:09:44.597 "rw_ios_per_sec": 0, 00:09:44.597 "rw_mbytes_per_sec": 0, 00:09:44.597 "r_mbytes_per_sec": 0, 00:09:44.597 "w_mbytes_per_sec": 0 00:09:44.597 }, 00:09:44.597 "claimed": false, 00:09:44.597 "zoned": false, 00:09:44.597 "supported_io_types": { 00:09:44.597 "read": true, 00:09:44.597 "write": true, 00:09:44.597 "unmap": true, 00:09:44.597 "flush": false, 00:09:44.597 "reset": true, 00:09:44.597 "nvme_admin": false, 00:09:44.597 "nvme_io": false, 00:09:44.597 "nvme_io_md": false, 00:09:44.597 "write_zeroes": true, 00:09:44.597 "zcopy": false, 00:09:44.597 "get_zone_info": false, 00:09:44.597 "zone_management": false, 00:09:44.597 "zone_append": false, 00:09:44.597 "compare": false, 00:09:44.597 "compare_and_write": false, 
00:09:44.597 "abort": false, 00:09:44.597 "seek_hole": true, 00:09:44.597 "seek_data": true, 00:09:44.597 "copy": false, 00:09:44.597 "nvme_iov_md": false 00:09:44.597 }, 00:09:44.597 "driver_specific": { 00:09:44.597 "lvol": { 00:09:44.597 "lvol_store_uuid": "95ce5c54-cdf0-4ddd-bfa2-5d8fa1499963", 00:09:44.597 "base_bdev": "aio_bdev", 00:09:44.597 "thin_provision": false, 00:09:44.597 "num_allocated_clusters": 38, 00:09:44.597 "snapshot": false, 00:09:44.597 "clone": false, 00:09:44.597 "esnap_clone": false 00:09:44.597 } 00:09:44.597 } 00:09:44.597 } 00:09:44.597 ] 00:09:44.597 20:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:44.597 20:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95ce5c54-cdf0-4ddd-bfa2-5d8fa1499963 00:09:44.597 20:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:44.855 20:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:44.855 20:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95ce5c54-cdf0-4ddd-bfa2-5d8fa1499963 00:09:44.855 20:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:45.113 20:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:45.113 20:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e22141b0-8561-468c-829a-5f8e775e1192 00:09:45.372 20:10:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 95ce5c54-cdf0-4ddd-bfa2-5d8fa1499963 00:09:45.939 20:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:45.939 20:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:46.197 00:09:46.197 real 0m19.318s 00:09:46.197 user 0m48.055s 00:09:46.197 sys 0m4.995s 00:09:46.197 20:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.197 20:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:46.197 ************************************ 00:09:46.197 END TEST lvs_grow_dirty 00:09:46.197 ************************************ 00:09:46.197 20:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:46.197 20:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:46.197 20:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:46.197 20:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:46.197 20:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:46.197 20:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:46.197 20:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:46.197 20:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:46.197 20:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:46.197 nvmf_trace.0 00:09:46.197 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:46.197 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:46.197 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:46.197 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:46.197 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:46.197 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:46.197 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:46.197 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:46.197 rmmod nvme_tcp 00:09:46.197 rmmod nvme_fabrics 00:09:46.197 rmmod nvme_keyring 00:09:46.197 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:46.197 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:46.197 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:46.197 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 135753 ']' 00:09:46.197 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 135753 00:09:46.197 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 135753 ']' 00:09:46.197 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 135753 
00:09:46.197 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:46.197 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:46.197 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 135753 00:09:46.197 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:46.197 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:46.197 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 135753' 00:09:46.197 killing process with pid 135753 00:09:46.197 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 135753 00:09:46.197 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 135753 00:09:46.459 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:46.459 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:46.459 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:46.459 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:46.459 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:46.459 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:46.459 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:46.459 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:46.459 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:09:46.459 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.459 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.459 20:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.372 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:48.372 00:09:48.372 real 0m42.551s 00:09:48.372 user 1m11.267s 00:09:48.372 sys 0m8.883s 00:09:48.372 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.372 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:48.372 ************************************ 00:09:48.372 END TEST nvmf_lvs_grow 00:09:48.372 ************************************ 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:48.631 ************************************ 00:09:48.631 START TEST nvmf_bdev_io_wait 00:09:48.631 ************************************ 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:48.631 * Looking for test storage... 
00:09:48.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:48.631 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:48.632 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.632 --rc genhtml_branch_coverage=1 00:09:48.632 --rc genhtml_function_coverage=1 00:09:48.632 --rc genhtml_legend=1 00:09:48.632 --rc geninfo_all_blocks=1 00:09:48.632 --rc geninfo_unexecuted_blocks=1 00:09:48.632 00:09:48.632 ' 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:48.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.632 --rc genhtml_branch_coverage=1 00:09:48.632 --rc genhtml_function_coverage=1 00:09:48.632 --rc genhtml_legend=1 00:09:48.632 --rc geninfo_all_blocks=1 00:09:48.632 --rc geninfo_unexecuted_blocks=1 00:09:48.632 00:09:48.632 ' 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:48.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.632 --rc genhtml_branch_coverage=1 00:09:48.632 --rc genhtml_function_coverage=1 00:09:48.632 --rc genhtml_legend=1 00:09:48.632 --rc geninfo_all_blocks=1 00:09:48.632 --rc geninfo_unexecuted_blocks=1 00:09:48.632 00:09:48.632 ' 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:48.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.632 --rc genhtml_branch_coverage=1 00:09:48.632 --rc genhtml_function_coverage=1 00:09:48.632 --rc genhtml_legend=1 00:09:48.632 --rc geninfo_all_blocks=1 00:09:48.632 --rc geninfo_unexecuted_blocks=1 00:09:48.632 00:09:48.632 ' 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:48.632 20:11:00 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:48.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:48.632 20:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:51.174 20:11:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:51.174 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:51.174 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.174 20:11:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:51.174 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.174 
20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:51.174 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:51.174 20:11:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:51.174 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:51.175 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:09:51.175 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:51.175 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:51.175 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:51.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:51.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:09:51.175 00:09:51.175 --- 10.0.0.2 ping statistics --- 00:09:51.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.175 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:09:51.175 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:51.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:51.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:09:51.175 00:09:51.175 --- 10.0.0.1 ping statistics --- 00:09:51.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.175 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:09:51.175 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:51.175 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:51.175 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:51.175 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:51.175 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:51.175 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:51.175 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:51.175 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:51.175 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:51.175 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:51.175 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:51.175 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:51.175 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:51.175 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=138295 00:09:51.175 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:51.175 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 138295 00:09:51.175 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 138295 ']' 00:09:51.175 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.175 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.175 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.175 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.175 20:11:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:51.175 [2024-11-18 20:11:03.024458] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:09:51.175 [2024-11-18 20:11:03.024549] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.175 [2024-11-18 20:11:03.097146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:51.175 [2024-11-18 20:11:03.146131] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.175 [2024-11-18 20:11:03.146199] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:51.175 [2024-11-18 20:11:03.146212] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:51.175 [2024-11-18 20:11:03.146223] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:51.175 [2024-11-18 20:11:03.146247] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:51.175 [2024-11-18 20:11:03.147703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.175 [2024-11-18 20:11:03.147763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.175 [2024-11-18 20:11:03.147786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.175 [2024-11-18 20:11:03.147789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.434 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.434 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:51.434 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:51.434 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:51.434 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:51.434 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.434 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:51.434 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.434 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:51.434 20:11:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.434 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:51.434 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.434 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:51.434 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.434 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:51.434 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.434 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:51.434 [2024-11-18 20:11:03.363504] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:51.434 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.434 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:51.434 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.434 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:51.434 Malloc0 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.435 
20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:51.435 [2024-11-18 20:11:03.415167] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=138429 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=138433 
00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:51.435 { 00:09:51.435 "params": { 00:09:51.435 "name": "Nvme$subsystem", 00:09:51.435 "trtype": "$TEST_TRANSPORT", 00:09:51.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:51.435 "adrfam": "ipv4", 00:09:51.435 "trsvcid": "$NVMF_PORT", 00:09:51.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:51.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:51.435 "hdgst": ${hdgst:-false}, 00:09:51.435 "ddgst": ${ddgst:-false} 00:09:51.435 }, 00:09:51.435 "method": "bdev_nvme_attach_controller" 00:09:51.435 } 00:09:51.435 EOF 00:09:51.435 )") 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=138436 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:51.435 { 00:09:51.435 "params": { 00:09:51.435 
"name": "Nvme$subsystem", 00:09:51.435 "trtype": "$TEST_TRANSPORT", 00:09:51.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:51.435 "adrfam": "ipv4", 00:09:51.435 "trsvcid": "$NVMF_PORT", 00:09:51.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:51.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:51.435 "hdgst": ${hdgst:-false}, 00:09:51.435 "ddgst": ${ddgst:-false} 00:09:51.435 }, 00:09:51.435 "method": "bdev_nvme_attach_controller" 00:09:51.435 } 00:09:51.435 EOF 00:09:51.435 )") 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=138441 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:51.435 { 00:09:51.435 "params": { 00:09:51.435 "name": "Nvme$subsystem", 00:09:51.435 "trtype": "$TEST_TRANSPORT", 00:09:51.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:51.435 "adrfam": "ipv4", 00:09:51.435 "trsvcid": "$NVMF_PORT", 00:09:51.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:51.435 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:09:51.435 "hdgst": ${hdgst:-false}, 00:09:51.435 "ddgst": ${ddgst:-false} 00:09:51.435 }, 00:09:51.435 "method": "bdev_nvme_attach_controller" 00:09:51.435 } 00:09:51.435 EOF 00:09:51.435 )") 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:51.435 { 00:09:51.435 "params": { 00:09:51.435 "name": "Nvme$subsystem", 00:09:51.435 "trtype": "$TEST_TRANSPORT", 00:09:51.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:51.435 "adrfam": "ipv4", 00:09:51.435 "trsvcid": "$NVMF_PORT", 00:09:51.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:51.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:51.435 "hdgst": ${hdgst:-false}, 00:09:51.435 "ddgst": ${ddgst:-false} 00:09:51.435 }, 00:09:51.435 "method": "bdev_nvme_attach_controller" 00:09:51.435 } 00:09:51.435 EOF 00:09:51.435 )") 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 138429 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:51.435 "params": { 00:09:51.435 "name": "Nvme1", 00:09:51.435 "trtype": "tcp", 00:09:51.435 "traddr": "10.0.0.2", 00:09:51.435 "adrfam": "ipv4", 00:09:51.435 "trsvcid": "4420", 00:09:51.435 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:51.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:51.435 "hdgst": false, 00:09:51.435 "ddgst": false 00:09:51.435 }, 00:09:51.435 "method": "bdev_nvme_attach_controller" 00:09:51.435 }' 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
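The `{ "params": { "name": "Nvme1", ... } }` blobs printed above come from `gen_nvmf_target_json`, which accumulates one heredoc stanza per subsystem and filters the result through `jq`. A simplified, self-contained sketch of that pattern, with values hard-coded to match this run (the real helper pulls them from the test environment):

```shell
# Sketch of the gen_nvmf_target_json pattern visible in the trace:
# build one JSON stanza per subsystem and print them for bdevperf.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=""
for subsystem in 1; do
    config="$config$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)"
done
printf '%s\n' "$config"
```

In the trace, this JSON reaches each bdevperf instance over `/dev/fd/63` via process substitution, which is why four nearly identical configs are printed, one per workload (write, read, flush, unmap).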
00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:51.435 "params": { 00:09:51.435 "name": "Nvme1", 00:09:51.435 "trtype": "tcp", 00:09:51.435 "traddr": "10.0.0.2", 00:09:51.435 "adrfam": "ipv4", 00:09:51.435 "trsvcid": "4420", 00:09:51.435 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:51.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:51.435 "hdgst": false, 00:09:51.435 "ddgst": false 00:09:51.435 }, 00:09:51.435 "method": "bdev_nvme_attach_controller" 00:09:51.435 }' 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:51.435 "params": { 00:09:51.435 "name": "Nvme1", 00:09:51.435 "trtype": "tcp", 00:09:51.435 "traddr": "10.0.0.2", 00:09:51.435 "adrfam": "ipv4", 00:09:51.435 "trsvcid": "4420", 00:09:51.435 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:51.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:51.435 "hdgst": false, 00:09:51.435 "ddgst": false 00:09:51.435 }, 00:09:51.435 "method": "bdev_nvme_attach_controller" 00:09:51.435 }' 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:51.435 20:11:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:51.435 "params": { 00:09:51.435 "name": "Nvme1", 00:09:51.435 "trtype": "tcp", 00:09:51.435 "traddr": "10.0.0.2", 00:09:51.435 "adrfam": "ipv4", 00:09:51.435 "trsvcid": "4420", 00:09:51.435 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:51.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:51.436 "hdgst": false, 00:09:51.436 "ddgst": false 00:09:51.436 }, 00:09:51.436 "method": "bdev_nvme_attach_controller" 00:09:51.436 }' 00:09:51.695 [2024-11-18 20:11:03.465314] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:09:51.695 [2024-11-18 20:11:03.465316] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:09:51.695 [2024-11-18 20:11:03.465342] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:09:51.695 [2024-11-18 20:11:03.465342] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:09:51.695 [2024-11-18 20:11:03.465404] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:51.695 [2024-11-18 20:11:03.465404] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:51.695 [2024-11-18 20:11:03.465424] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:51.695 [2024-11-18 20:11:03.465425] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:51.695 [2024-11-18 20:11:03.646278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.695 [2024-11-18 20:11:03.688065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:51.954 [2024-11-18 20:11:03.746230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.954 [2024-11-18 20:11:03.790643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:51.954 [2024-11-18 20:11:03.820177] app.c: 919:spdk_app_start: 
*NOTICE*: Total cores available: 1 00:09:51.954 [2024-11-18 20:11:03.857073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:51.954 [2024-11-18 20:11:03.894816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.954 [2024-11-18 20:11:03.933993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:52.213 Running I/O for 1 seconds... 00:09:52.213 Running I/O for 1 seconds... 00:09:52.213 Running I/O for 1 seconds... 00:09:52.213 Running I/O for 1 seconds... 00:09:53.151 8772.00 IOPS, 34.27 MiB/s [2024-11-18T19:11:05.159Z] 6382.00 IOPS, 24.93 MiB/s 00:09:53.151 Latency(us) 00:09:53.151 [2024-11-18T19:11:05.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:53.151 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:53.151 Nvme1n1 : 1.01 8815.25 34.43 0.00 0.00 14445.27 8446.86 26214.40 00:09:53.151 [2024-11-18T19:11:05.159Z] =================================================================================================================== 00:09:53.151 [2024-11-18T19:11:05.159Z] Total : 8815.25 34.43 0.00 0.00 14445.27 8446.86 26214.40 00:09:53.151 00:09:53.151 Latency(us) 00:09:53.151 [2024-11-18T19:11:05.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:53.151 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:53.151 Nvme1n1 : 1.02 6420.25 25.08 0.00 0.00 19841.35 7815.77 26991.12 00:09:53.151 [2024-11-18T19:11:05.159Z] =================================================================================================================== 00:09:53.151 [2024-11-18T19:11:05.159Z] Total : 6420.25 25.08 0.00 0.00 19841.35 7815.77 26991.12 00:09:53.151 200448.00 IOPS, 783.00 MiB/s 00:09:53.151 Latency(us) 00:09:53.151 [2024-11-18T19:11:05.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:53.151 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 
00:09:53.151 Nvme1n1 : 1.00 200070.06 781.52 0.00 0.00 636.20 289.75 1844.72 00:09:53.151 [2024-11-18T19:11:05.159Z] =================================================================================================================== 00:09:53.151 [2024-11-18T19:11:05.159Z] Total : 200070.06 781.52 0.00 0.00 636.20 289.75 1844.72 00:09:53.151 6640.00 IOPS, 25.94 MiB/s 00:09:53.151 Latency(us) 00:09:53.151 [2024-11-18T19:11:05.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:53.151 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:53.151 Nvme1n1 : 1.01 6746.02 26.35 0.00 0.00 18923.25 3616.62 38447.79 00:09:53.151 [2024-11-18T19:11:05.159Z] =================================================================================================================== 00:09:53.151 [2024-11-18T19:11:05.159Z] Total : 6746.02 26.35 0.00 0.00 18923.25 3616.62 38447.79 00:09:53.411 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 138433 00:09:53.411 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 138436 00:09:53.411 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 138441 00:09:53.411 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:53.411 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.411 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.411 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.411 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:53.411 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # 
nvmftestfini 00:09:53.411 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:53.411 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:53.411 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:53.411 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:53.411 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:53.411 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:53.411 rmmod nvme_tcp 00:09:53.411 rmmod nvme_fabrics 00:09:53.411 rmmod nvme_keyring 00:09:53.411 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:53.411 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:53.411 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:53.411 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 138295 ']' 00:09:53.411 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 138295 00:09:53.411 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 138295 ']' 00:09:53.411 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 138295 00:09:53.411 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:53.411 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:53.411 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 138295 00:09:53.411 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:53.411 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:53.411 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 138295' 00:09:53.411 killing process with pid 138295 00:09:53.411 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 138295 00:09:53.411 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 138295 00:09:53.670 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:53.670 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:53.670 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:53.670 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:53.670 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:53.670 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:53.670 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:53.670 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:53.670 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:53.670 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.670 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.670 20:11:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:09:56.216 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:56.216 00:09:56.216 real 0m7.192s 00:09:56.216 user 0m15.283s 00:09:56.216 sys 0m3.437s 00:09:56.216 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.216 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:56.216 ************************************ 00:09:56.216 END TEST nvmf_bdev_io_wait 00:09:56.216 ************************************ 00:09:56.216 20:11:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:56.216 20:11:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:56.216 20:11:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.216 20:11:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:56.216 ************************************ 00:09:56.216 START TEST nvmf_queue_depth 00:09:56.216 ************************************ 00:09:56.216 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:56.216 * Looking for test storage... 
00:09:56.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:56.217 
20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:56.217 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:56.217 --rc genhtml_branch_coverage=1 00:09:56.217 --rc genhtml_function_coverage=1 00:09:56.217 --rc genhtml_legend=1 00:09:56.217 --rc geninfo_all_blocks=1 00:09:56.217 --rc geninfo_unexecuted_blocks=1 00:09:56.217 00:09:56.217 ' 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:56.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.217 --rc genhtml_branch_coverage=1 00:09:56.217 --rc genhtml_function_coverage=1 00:09:56.217 --rc genhtml_legend=1 00:09:56.217 --rc geninfo_all_blocks=1 00:09:56.217 --rc geninfo_unexecuted_blocks=1 00:09:56.217 00:09:56.217 ' 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:56.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.217 --rc genhtml_branch_coverage=1 00:09:56.217 --rc genhtml_function_coverage=1 00:09:56.217 --rc genhtml_legend=1 00:09:56.217 --rc geninfo_all_blocks=1 00:09:56.217 --rc geninfo_unexecuted_blocks=1 00:09:56.217 00:09:56.217 ' 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:56.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.217 --rc genhtml_branch_coverage=1 00:09:56.217 --rc genhtml_function_coverage=1 00:09:56.217 --rc genhtml_legend=1 00:09:56.217 --rc geninfo_all_blocks=1 00:09:56.217 --rc geninfo_unexecuted_blocks=1 00:09:56.217 00:09:56.217 ' 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:56.217 20:11:07 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.217 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:56.218 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.218 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:56.218 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:56.218 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:56.218 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:56.218 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.218 20:11:07 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.218 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:56.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:56.218 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:56.218 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:56.218 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:56.218 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:56.218 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:56.218 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:56.218 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:56.218 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:56.218 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:56.218 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:56.218 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:56.218 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:56.218 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.218 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.218 20:11:07 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.218 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:56.218 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:56.218 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:56.218 20:11:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:58.126 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:58.127 20:11:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:58.127 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:58.127 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:58.127 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:58.127 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:58.127 
20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:58.127 20:11:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:58.127 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:58.127 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:58.127 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:58.127 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:58.127 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:09:58.127 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:58.127 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:58.127 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:58.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:58.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:09:58.127 00:09:58.127 --- 10.0.0.2 ping statistics --- 00:09:58.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.127 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:09:58.127 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:58.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:58.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:09:58.127 00:09:58.127 --- 10.0.0.1 ping statistics --- 00:09:58.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.128 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:09:58.128 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:58.128 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:58.128 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:58.128 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:58.128 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:58.128 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:58.128 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:58.128 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:58.128 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:58.128 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:58.128 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:58.128 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:58.128 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:58.128 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=140560 00:09:58.128 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:58.128 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 140560 00:09:58.128 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 140560 ']' 00:09:58.128 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.128 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.128 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.128 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.128 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:58.386 [2024-11-18 20:11:10.150116] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:09:58.386 [2024-11-18 20:11:10.150207] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.386 [2024-11-18 20:11:10.229212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.386 [2024-11-18 20:11:10.273517] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:58.386 [2024-11-18 20:11:10.273575] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:58.386 [2024-11-18 20:11:10.273611] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:58.386 [2024-11-18 20:11:10.273622] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:58.386 [2024-11-18 20:11:10.273631] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:58.386 [2024-11-18 20:11:10.274257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.386 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.386 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:58.386 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:58.386 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:58.386 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:58.645 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:58.645 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:58.645 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.645 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:58.645 [2024-11-18 20:11:10.416834] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:58.645 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.645 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
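The wrapped xtrace lines above perform the namespace wiring and the first target-configuration RPCs. As a readable summary, here is a dry-run sketch of that sequence (interface names, addresses, port, and RPC arguments are taken from the trace; `run` only echoes each command so the sequence can be reviewed without root, and `rpc.py` stands in for the harness's `rpc_cmd` wrapper):

```shell
# Dry-run sketch of the nvmf TCP bring-up shown in the trace above.
# run() echoes rather than executes, so no root privileges are needed.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk                        # target-side network namespace
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"       # move the target NIC into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1   # initiator side stays in the root namespace
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Target configuration, as issued through rpc_cmd in the trace so far:
run rpc.py nvmf_create_transport -t tcp -o -u 8192
run rpc.py bdev_malloc_create 64 512 -b Malloc0
```

Connectivity is then verified with a `ping` in each direction before the target is configured further.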
00:09:58.645 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.645 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:58.645 Malloc0 00:09:58.645 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.645 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:58.645 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.645 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:58.645 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.645 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:58.645 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.645 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:58.645 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.645 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:58.645 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.645 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:58.645 [2024-11-18 20:11:10.465199] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:58.645 20:11:10 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.645 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=140700 00:09:58.645 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:58.645 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:58.645 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 140700 /var/tmp/bdevperf.sock 00:09:58.645 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 140700 ']' 00:09:58.645 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:58.645 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.645 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:58.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:58.645 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.645 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:58.645 [2024-11-18 20:11:10.515367] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:09:58.645 [2024-11-18 20:11:10.515456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140700 ] 00:09:58.645 [2024-11-18 20:11:10.585561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.645 [2024-11-18 20:11:10.633015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.904 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.904 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:58.904 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:58.904 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.904 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:59.162 NVMe0n1 00:09:59.162 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.162 20:11:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:59.162 Running I/O for 10 seconds... 
00:10:01.487 8789.00 IOPS, 34.33 MiB/s [2024-11-18T19:11:14.442Z] 8873.00 IOPS, 34.66 MiB/s [2024-11-18T19:11:15.378Z] 8889.33 IOPS, 34.72 MiB/s [2024-11-18T19:11:16.332Z] 8948.75 IOPS, 34.96 MiB/s [2024-11-18T19:11:17.321Z] 9000.60 IOPS, 35.16 MiB/s [2024-11-18T19:11:18.335Z] 8952.50 IOPS, 34.97 MiB/s [2024-11-18T19:11:19.339Z] 8917.43 IOPS, 34.83 MiB/s [2024-11-18T19:11:20.332Z] 8952.25 IOPS, 34.97 MiB/s [2024-11-18T19:11:21.330Z] 8961.11 IOPS, 35.00 MiB/s [2024-11-18T19:11:21.330Z] 8946.10 IOPS, 34.95 MiB/s 00:10:09.322 Latency(us) 00:10:09.322 [2024-11-18T19:11:21.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:09.322 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:09.322 Verification LBA range: start 0x0 length 0x4000 00:10:09.322 NVMe0n1 : 10.07 8978.17 35.07 0.00 0.00 113519.93 18738.44 69905.07 00:10:09.322 [2024-11-18T19:11:21.330Z] =================================================================================================================== 00:10:09.322 [2024-11-18T19:11:21.330Z] Total : 8978.17 35.07 0.00 0.00 113519.93 18738.44 69905.07 00:10:09.322 { 00:10:09.322 "results": [ 00:10:09.322 { 00:10:09.322 "job": "NVMe0n1", 00:10:09.322 "core_mask": "0x1", 00:10:09.322 "workload": "verify", 00:10:09.322 "status": "finished", 00:10:09.322 "verify_range": { 00:10:09.322 "start": 0, 00:10:09.322 "length": 16384 00:10:09.322 }, 00:10:09.322 "queue_depth": 1024, 00:10:09.322 "io_size": 4096, 00:10:09.322 "runtime": 10.073099, 00:10:09.322 "iops": 8978.170471669146, 00:10:09.322 "mibps": 35.0709784049576, 00:10:09.322 "io_failed": 0, 00:10:09.322 "io_timeout": 0, 00:10:09.322 "avg_latency_us": 113519.93482385724, 00:10:09.322 "min_latency_us": 18738.44148148148, 00:10:09.322 "max_latency_us": 69905.06666666667 00:10:09.322 } 00:10:09.322 ], 00:10:09.322 "core_count": 1 00:10:09.322 } 00:10:09.322 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 140700 00:10:09.322 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 140700 ']' 00:10:09.322 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 140700 00:10:09.322 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:09.322 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.322 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 140700 00:10:09.322 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:09.322 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:09.322 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 140700' 00:10:09.322 killing process with pid 140700 00:10:09.322 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 140700 00:10:09.322 Received shutdown signal, test time was about 10.000000 seconds 00:10:09.322 00:10:09.322 Latency(us) 00:10:09.322 [2024-11-18T19:11:21.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:09.322 [2024-11-18T19:11:21.330Z] =================================================================================================================== 00:10:09.322 [2024-11-18T19:11:21.330Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:09.322 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 140700 00:10:09.619 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:09.619 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
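The derived figures in the bdevperf results block above can be cross-checked from the raw fields it reports: MiB/s is IOPS times the I/O size divided by 2^20, and the approximate I/O count is IOPS times runtime. A minimal sketch, using the values from this run's JSON:

```python
# Cross-check bdevperf's derived throughput from the raw result fields
# reported in the log above (values copied from the "results" JSON).
result = {
    "iops": 8978.170471669146,
    "io_size": 4096,       # bytes per I/O (-o 4096)
    "runtime": 10.073099,  # seconds (-t 10, plus ramp overhead)
}

# MiB/s = IOPS * io_size / 2**20; matches the reported "mibps" field.
mibps = result["iops"] * result["io_size"] / 2**20

# Total I/Os completed over the run, to one I/O of precision.
total_ios = result["iops"] * result["runtime"]

print(f"{mibps:.2f} MiB/s over ~{total_ios:.0f} I/Os")
```

With a 4096-byte I/O size this reduces to IOPS / 256, which reproduces the 35.07 MiB/s the table prints.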
00:10:09.619 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:09.619 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:09.619 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:09.619 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:09.619 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:09.619 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:09.619 rmmod nvme_tcp 00:10:09.619 rmmod nvme_fabrics 00:10:09.619 rmmod nvme_keyring 00:10:09.619 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:09.619 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:09.619 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:09.619 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 140560 ']' 00:10:09.619 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 140560 00:10:09.619 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 140560 ']' 00:10:09.619 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 140560 00:10:09.619 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:09.619 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.619 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 140560 00:10:09.619 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:10:09.619 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:09.619 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 140560' 00:10:09.619 killing process with pid 140560 00:10:09.619 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 140560 00:10:09.619 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 140560 00:10:09.914 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:09.914 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:09.914 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:09.914 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:09.914 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:09.914 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:09.914 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:09.914 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:09.914 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:09.914 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.914 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.914 20:11:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.986 20:11:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:11.986 00:10:11.986 real 0m16.134s 00:10:11.986 user 0m22.616s 00:10:11.986 sys 0m3.175s 00:10:11.986 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.986 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.986 ************************************ 00:10:11.986 END TEST nvmf_queue_depth 00:10:11.986 ************************************ 00:10:11.986 20:11:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:11.986 20:11:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:11.986 20:11:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.986 20:11:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:11.986 ************************************ 00:10:11.986 START TEST nvmf_target_multipath 00:10:11.986 ************************************ 00:10:11.986 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:11.986 * Looking for test storage... 
00:10:11.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.986 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:11.986 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:10:11.986 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:11.986 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:11.986 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.986 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.986 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.986 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.986 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.986 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.986 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.986 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.986 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.986 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.986 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.986 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:11.986 20:11:23 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:11.986 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.986 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:11.986 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
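The `lt`/`cmp_versions` trace above splits each version string on `.`/`-` and compares field by field (here deciding `1.15 < 2` for the lcov version check). A compact re-implementation of that idea, for illustration only (the function name `ver_lt` is ours, not SPDK's):

```shell
# Field-wise version comparison, as sketched from the cmp_versions trace above:
# split both versions on '.' and '-', then compare numerically per field,
# treating missing fields as 0. Returns 0 (true) when $1 < $2.
ver_lt() {
  local IFS=.- a b i
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1  # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

The harness uses the result to pick lcov option sets compatible with the installed version.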
00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:12.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.296 --rc genhtml_branch_coverage=1 00:10:12.296 --rc genhtml_function_coverage=1 00:10:12.296 --rc genhtml_legend=1 00:10:12.296 --rc geninfo_all_blocks=1 00:10:12.296 --rc geninfo_unexecuted_blocks=1 00:10:12.296 00:10:12.296 ' 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:12.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.296 --rc genhtml_branch_coverage=1 00:10:12.296 --rc genhtml_function_coverage=1 00:10:12.296 --rc genhtml_legend=1 00:10:12.296 --rc geninfo_all_blocks=1 00:10:12.296 --rc geninfo_unexecuted_blocks=1 00:10:12.296 00:10:12.296 ' 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:12.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.296 --rc genhtml_branch_coverage=1 00:10:12.296 --rc genhtml_function_coverage=1 00:10:12.296 --rc genhtml_legend=1 00:10:12.296 --rc geninfo_all_blocks=1 00:10:12.296 --rc geninfo_unexecuted_blocks=1 00:10:12.296 00:10:12.296 ' 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:12.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.296 --rc genhtml_branch_coverage=1 00:10:12.296 --rc genhtml_function_coverage=1 00:10:12.296 --rc genhtml_legend=1 00:10:12.296 --rc geninfo_all_blocks=1 00:10:12.296 --rc geninfo_unexecuted_blocks=1 00:10:12.296 00:10:12.296 ' 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.296 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.297 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.297 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:12.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:12.297 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:12.297 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:12.297 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:12.297 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:12.297 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:12.297 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:12.297 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:12.297 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:12.297 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:12.297 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.297 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:12.297 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:12.297 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:12.297 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.297 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.297 20:11:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.297 20:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:12.297 20:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:12.297 20:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:12.297 20:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:10:14.303 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:14.303 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:14.303 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:14.303 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:14.303 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:14.303 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:14.303 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:14.303 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:10:14.303 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:14.303 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:14.303 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:14.303 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:14.303 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:14.303 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:14.303 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:14.303 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:14.303 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:14.304 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:14.304 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:14.304 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:14.304 20:11:26 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:14.304 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:14.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:14.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:10:14.304 00:10:14.304 --- 10.0.0.2 ping statistics --- 00:10:14.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.304 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:14.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:14.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:10:14.304 00:10:14.304 --- 10.0.0.1 ping statistics --- 00:10:14.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.304 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:14.304 only one NIC for nvmf test 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:14.304 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:14.305 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:14.305 20:11:26 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:14.305 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:14.305 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:14.305 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:14.305 rmmod nvme_tcp 00:10:14.305 rmmod nvme_fabrics 00:10:14.305 rmmod nvme_keyring 00:10:14.575 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:14.575 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:14.575 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:14.575 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:14.575 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:14.575 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:14.575 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:14.575 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:14.575 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:14.575 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:14.575 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:14.575 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:14.575 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:14.575 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.575 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.575 20:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:16.486 00:10:16.486 real 0m4.544s 00:10:16.486 user 0m0.935s 00:10:16.486 sys 0m1.612s 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:16.486 ************************************ 00:10:16.486 END TEST nvmf_target_multipath 00:10:16.486 ************************************ 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:16.486 ************************************ 00:10:16.486 START TEST nvmf_zcopy 00:10:16.486 ************************************ 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:16.486 * Looking for test storage... 00:10:16.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:10:16.486 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:16.747 20:11:28 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:16.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.747 --rc genhtml_branch_coverage=1 00:10:16.747 --rc genhtml_function_coverage=1 00:10:16.747 --rc genhtml_legend=1 00:10:16.747 --rc geninfo_all_blocks=1 00:10:16.747 --rc geninfo_unexecuted_blocks=1 00:10:16.747 00:10:16.747 ' 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:16.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.747 --rc genhtml_branch_coverage=1 00:10:16.747 --rc genhtml_function_coverage=1 00:10:16.747 --rc genhtml_legend=1 00:10:16.747 --rc geninfo_all_blocks=1 00:10:16.747 --rc geninfo_unexecuted_blocks=1 00:10:16.747 00:10:16.747 ' 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:16.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.747 --rc genhtml_branch_coverage=1 00:10:16.747 --rc genhtml_function_coverage=1 00:10:16.747 --rc genhtml_legend=1 00:10:16.747 --rc geninfo_all_blocks=1 00:10:16.747 --rc geninfo_unexecuted_blocks=1 00:10:16.747 00:10:16.747 ' 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:16.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.747 --rc genhtml_branch_coverage=1 00:10:16.747 --rc 
genhtml_function_coverage=1 00:10:16.747 --rc genhtml_legend=1 00:10:16.747 --rc geninfo_all_blocks=1 00:10:16.747 --rc geninfo_unexecuted_blocks=1 00:10:16.747 00:10:16.747 ' 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:16.747 20:11:28 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:16.747 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:16.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:16.748 20:11:28 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:16.748 20:11:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:19.284 20:11:30 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:19.284 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:19.285 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:19.285 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:19.285 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:19.285 20:11:30 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:19.285 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:19.285 20:11:30 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:19.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:19.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:10:19.285 00:10:19.285 --- 10.0.0.2 ping statistics --- 00:10:19.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.285 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:19.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:19.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:10:19.285 00:10:19.285 --- 10.0.0.1 ping statistics --- 00:10:19.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.285 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=145968 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 145968 00:10:19.285 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 145968 ']' 00:10:19.286 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.286 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.286 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.286 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.286 20:11:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.286 [2024-11-18 20:11:30.960059] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:10:19.286 [2024-11-18 20:11:30.960161] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:19.286 [2024-11-18 20:11:31.030123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.286 [2024-11-18 20:11:31.071465] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:19.286 [2024-11-18 20:11:31.071526] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:19.286 [2024-11-18 20:11:31.071550] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:19.286 [2024-11-18 20:11:31.071560] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:19.286 [2024-11-18 20:11:31.071569] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:19.286 [2024-11-18 20:11:31.072220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.286 [2024-11-18 20:11:31.203117] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.286 [2024-11-18 20:11:31.219339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.286 malloc0 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:19.286 { 00:10:19.286 "params": { 00:10:19.286 "name": "Nvme$subsystem", 00:10:19.286 "trtype": "$TEST_TRANSPORT", 00:10:19.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:19.286 "adrfam": "ipv4", 00:10:19.286 "trsvcid": "$NVMF_PORT", 00:10:19.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:19.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:19.286 "hdgst": ${hdgst:-false}, 00:10:19.286 "ddgst": ${ddgst:-false} 00:10:19.286 }, 00:10:19.286 "method": "bdev_nvme_attach_controller" 00:10:19.286 } 00:10:19.286 EOF 00:10:19.286 )") 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:19.286 20:11:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:19.286 "params": { 00:10:19.286 "name": "Nvme1", 00:10:19.286 "trtype": "tcp", 00:10:19.286 "traddr": "10.0.0.2", 00:10:19.286 "adrfam": "ipv4", 00:10:19.286 "trsvcid": "4420", 00:10:19.286 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:19.286 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:19.286 "hdgst": false, 00:10:19.286 "ddgst": false 00:10:19.286 }, 00:10:19.286 "method": "bdev_nvme_attach_controller" 00:10:19.286 }' 00:10:19.547 [2024-11-18 20:11:31.301538] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:10:19.547 [2024-11-18 20:11:31.301605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145992 ] 00:10:19.547 [2024-11-18 20:11:31.367689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.547 [2024-11-18 20:11:31.416217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.807 Running I/O for 10 seconds... 
00:10:21.693 5636.00 IOPS, 44.03 MiB/s [2024-11-18T19:11:35.086Z] 5753.50 IOPS, 44.95 MiB/s [2024-11-18T19:11:36.036Z] 5806.00 IOPS, 45.36 MiB/s [2024-11-18T19:11:36.979Z] 5820.75 IOPS, 45.47 MiB/s [2024-11-18T19:11:37.921Z] 5817.40 IOPS, 45.45 MiB/s [2024-11-18T19:11:38.862Z] 5818.50 IOPS, 45.46 MiB/s [2024-11-18T19:11:39.805Z] 5829.57 IOPS, 45.54 MiB/s [2024-11-18T19:11:40.751Z] 5830.75 IOPS, 45.55 MiB/s [2024-11-18T19:11:41.694Z] 5830.78 IOPS, 45.55 MiB/s [2024-11-18T19:11:41.694Z] 5833.70 IOPS, 45.58 MiB/s 00:10:29.686 Latency(us) 00:10:29.686 [2024-11-18T19:11:41.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:29.686 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:29.686 Verification LBA range: start 0x0 length 0x1000 00:10:29.686 Nvme1n1 : 10.01 5834.46 45.58 0.00 0.00 21878.95 351.95 29709.65 00:10:29.686 [2024-11-18T19:11:41.694Z] =================================================================================================================== 00:10:29.686 [2024-11-18T19:11:41.694Z] Total : 5834.46 45.58 0.00 0.00 21878.95 351.95 29709.65 00:10:29.946 20:11:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=147187 00:10:29.946 20:11:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:29.946 20:11:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.946 20:11:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:29.946 20:11:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:29.946 20:11:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:29.946 20:11:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:29.946 20:11:41 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:29.946 20:11:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:29.946 { 00:10:29.946 "params": { 00:10:29.946 "name": "Nvme$subsystem", 00:10:29.946 "trtype": "$TEST_TRANSPORT", 00:10:29.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:29.946 "adrfam": "ipv4", 00:10:29.946 "trsvcid": "$NVMF_PORT", 00:10:29.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:29.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:29.946 "hdgst": ${hdgst:-false}, 00:10:29.946 "ddgst": ${ddgst:-false} 00:10:29.946 }, 00:10:29.946 "method": "bdev_nvme_attach_controller" 00:10:29.946 } 00:10:29.946 EOF 00:10:29.946 )") 00:10:29.946 20:11:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:29.946 [2024-11-18 20:11:41.884428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.946 [2024-11-18 20:11:41.884472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.946 20:11:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
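The bdevperf summary tables above report both IOPS and MiB/s; with the 8 KiB I/O size configured here (`-o 8192`), the two columns are related by MiB/s = IOPS × io_size / 1048576. A small helper sketch (the function name is hypothetical, not part of the SPDK scripts) showing that the reported 45.58 MiB/s follows from 5834.46 IOPS:

```shell
#!/usr/bin/env bash
# Convert an IOPS figure to MiB/s for a given I/O size in bytes.
# awk is used because bash arithmetic is integer-only.
iops_to_mibs() {
    local iops=$1 io_size=$2
    awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f\n", i * s / 1048576 }'
}

iops_to_mibs 5834.46 8192   # 5834.46 IOPS at 8 KiB -> 45.58 MiB/s, as in the table
```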
00:10:29.946 20:11:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:29.946 20:11:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:29.946 "params": { 00:10:29.946 "name": "Nvme1", 00:10:29.946 "trtype": "tcp", 00:10:29.946 "traddr": "10.0.0.2", 00:10:29.946 "adrfam": "ipv4", 00:10:29.946 "trsvcid": "4420", 00:10:29.946 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:29.946 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:29.946 "hdgst": false, 00:10:29.946 "ddgst": false 00:10:29.946 }, 00:10:29.946 "method": "bdev_nvme_attach_controller" 00:10:29.946 }' 00:10:29.946 [2024-11-18 20:11:41.892374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.946 [2024-11-18 20:11:41.892397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.946 [2024-11-18 20:11:41.900397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.946 [2024-11-18 20:11:41.900419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.946 [2024-11-18 20:11:41.908416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.946 [2024-11-18 20:11:41.908436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.946 [2024-11-18 20:11:41.916441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.946 [2024-11-18 20:11:41.916463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.946 [2024-11-18 20:11:41.922340] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:10:29.946 [2024-11-18 20:11:41.922399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147187 ] 00:10:29.946 [2024-11-18 20:11:41.924460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.946 [2024-11-18 20:11:41.924480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.946 [2024-11-18 20:11:41.932484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.946 [2024-11-18 20:11:41.932505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.946 [2024-11-18 20:11:41.940503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.946 [2024-11-18 20:11:41.940524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.946 [2024-11-18 20:11:41.948556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.947 [2024-11-18 20:11:41.948583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.207 [2024-11-18 20:11:41.956548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.207 [2024-11-18 20:11:41.956569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.207 [2024-11-18 20:11:41.964567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.207 [2024-11-18 20:11:41.964587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.207 [2024-11-18 20:11:41.972588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.207 [2024-11-18 20:11:41.972608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:10:30.207 [2024-11-18 20:11:41.980611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.207 [2024-11-18 20:11:41.980654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.207 [2024-11-18 20:11:41.988656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.207 [2024-11-18 20:11:41.988677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.207 [2024-11-18 20:11:41.992051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.207 [2024-11-18 20:11:41.996690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.207 [2024-11-18 20:11:41.996715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.207 [2024-11-18 20:11:42.004752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.207 [2024-11-18 20:11:42.004794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.207 [2024-11-18 20:11:42.012726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.207 [2024-11-18 20:11:42.012751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.207 [2024-11-18 20:11:42.020745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.207 [2024-11-18 20:11:42.020770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.207 [2024-11-18 20:11:42.028764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.207 [2024-11-18 20:11:42.028787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.207 [2024-11-18 20:11:42.036786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.207 [2024-11-18 20:11:42.036810] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.207 [2024-11-18 20:11:42.040484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.207 [2024-11-18 20:11:42.044809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.207 [2024-11-18 20:11:42.044831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.207 [2024-11-18 20:11:42.052836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.207 [2024-11-18 20:11:42.052866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.207 [2024-11-18 20:11:42.060898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.207 [2024-11-18 20:11:42.060958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.207 [2024-11-18 20:11:42.068918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.207 [2024-11-18 20:11:42.068970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.207 [2024-11-18 20:11:42.076956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.207 [2024-11-18 20:11:42.077009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.207 [2024-11-18 20:11:42.084990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.207 [2024-11-18 20:11:42.085033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.207 [2024-11-18 20:11:42.092998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.207 [2024-11-18 20:11:42.093037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.207 [2024-11-18 20:11:42.101031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:10:30.207 [2024-11-18 20:11:42.101069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.207 [2024-11-18 20:11:42.109014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.207 [2024-11-18 20:11:42.109035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.207 [2024-11-18 20:11:42.117060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.207 [2024-11-18 20:11:42.117099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.207 [2024-11-18 20:11:42.125085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.207 [2024-11-18 20:11:42.125124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.207 [2024-11-18 20:11:42.133099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.207 [2024-11-18 20:11:42.133139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.207 [2024-11-18 20:11:42.141079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.207 [2024-11-18 20:11:42.141099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.207 [2024-11-18 20:11:42.149101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.207 [2024-11-18 20:11:42.149122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.207 [2024-11-18 20:11:42.157130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.207 [2024-11-18 20:11:42.157155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.207 [2024-11-18 20:11:42.165154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.207 [2024-11-18 
20:11:42.165183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.207 [2024-11-18 20:11:42.173176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.207 [2024-11-18 20:11:42.173198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.207 [2024-11-18 20:11:42.181198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.207 [2024-11-18 20:11:42.181220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.207 [2024-11-18 20:11:42.189217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.207 [2024-11-18 20:11:42.189240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.207 [2024-11-18 20:11:42.197240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.208 [2024-11-18 20:11:42.197277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.208 [2024-11-18 20:11:42.205263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.208 [2024-11-18 20:11:42.205282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.208 [2024-11-18 20:11:42.213300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.208 [2024-11-18 20:11:42.213321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.470 [2024-11-18 20:11:42.221320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.470 [2024-11-18 20:11:42.221340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.470 [2024-11-18 20:11:42.229347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.470 [2024-11-18 20:11:42.229371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:10:30.470 [2024-11-18 20:11:42.237370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.470 [2024-11-18 20:11:42.237393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.470 [2024-11-18 20:11:42.245389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.470 [2024-11-18 20:11:42.245410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.470 [2024-11-18 20:11:42.253413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.470 [2024-11-18 20:11:42.253433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.470 [2024-11-18 20:11:42.261436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.470 [2024-11-18 20:11:42.261456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.470 [2024-11-18 20:11:42.269458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.470 [2024-11-18 20:11:42.269478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.470 [2024-11-18 20:11:42.277483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.470 [2024-11-18 20:11:42.277504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.470 [2024-11-18 20:11:42.285506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.470 [2024-11-18 20:11:42.285528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.470 [2024-11-18 20:11:42.293526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.470 [2024-11-18 20:11:42.293547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.470 
[2024-11-18 20:11:42.301551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.470 [2024-11-18 20:11:42.301573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.470 [2024-11-18 20:11:42.309573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.470 [2024-11-18 20:11:42.309594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.470 [2024-11-18 20:11:42.317598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.470 [2024-11-18 20:11:42.317633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.470 [2024-11-18 20:11:42.325646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.470 [2024-11-18 20:11:42.325670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.470 [2024-11-18 20:11:42.333669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.470 [2024-11-18 20:11:42.333691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.470 [2024-11-18 20:11:42.341691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.470 [2024-11-18 20:11:42.341713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.470 [2024-11-18 20:11:42.349723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.470 [2024-11-18 20:11:42.349745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.470 [2024-11-18 20:11:42.357729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.470 [2024-11-18 20:11:42.357751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.470 [2024-11-18 20:11:42.365750] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.470 [2024-11-18 20:11:42.365772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.470 [2024-11-18 20:11:42.373771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.470 [2024-11-18 20:11:42.373792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.470 [2024-11-18 20:11:42.381799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.470 [2024-11-18 20:11:42.381825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.470 Running I/O for 5 seconds... 00:10:30.470 [2024-11-18 20:11:42.389822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.470 [2024-11-18 20:11:42.389847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.470 [2024-11-18 20:11:42.400990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.470 [2024-11-18 20:11:42.401018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.470 [2024-11-18 20:11:42.410860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.470 [2024-11-18 20:11:42.410889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.470 [2024-11-18 20:11:42.422025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.470 [2024-11-18 20:11:42.422068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.470 [2024-11-18 20:11:42.434857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.470 [2024-11-18 20:11:42.434887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.470 [2024-11-18 20:11:42.445139] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.470 [2024-11-18 20:11:42.445168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.470 [2024-11-18 20:11:42.456754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.470 [2024-11-18 20:11:42.456789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.470 [2024-11-18 20:11:42.467668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.470 [2024-11-18 20:11:42.467706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.733 [2024-11-18 20:11:42.478712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.733 [2024-11-18 20:11:42.478743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.733 [2024-11-18 20:11:42.490067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.733 [2024-11-18 20:11:42.490097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.733 [2024-11-18 20:11:42.502622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.733 [2024-11-18 20:11:42.502662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.733 [2024-11-18 20:11:42.513012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.733 [2024-11-18 20:11:42.513039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.733 [2024-11-18 20:11:42.523838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.733 [2024-11-18 20:11:42.523867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.733 [2024-11-18 20:11:42.536411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:30.733 [2024-11-18 20:11:42.536439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.733 [2024-11-18 20:11:42.546868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.733 [2024-11-18 20:11:42.546897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.733 [2024-11-18 20:11:42.557565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.733 [2024-11-18 20:11:42.557593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.733 [2024-11-18 20:11:42.568758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.733 [2024-11-18 20:11:42.568787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.733 [2024-11-18 20:11:42.579761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.733 [2024-11-18 20:11:42.579789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.734 [2024-11-18 20:11:42.592561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.734 [2024-11-18 20:11:42.592589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.734 [2024-11-18 20:11:42.602857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.734 [2024-11-18 20:11:42.602886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.734 [2024-11-18 20:11:42.613326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.734 [2024-11-18 20:11:42.613354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.734 [2024-11-18 20:11:42.624360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.734 
[2024-11-18 20:11:42.624387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.734 [2024-11-18 20:11:42.635140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.734 [2024-11-18 20:11:42.635168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.734 [2024-11-18 20:11:42.646018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.734 [2024-11-18 20:11:42.646046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.734 [2024-11-18 20:11:42.656677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.734 [2024-11-18 20:11:42.656706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.734 [2024-11-18 20:11:42.667319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.734 [2024-11-18 20:11:42.667347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.734 [2024-11-18 20:11:42.679655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.734 [2024-11-18 20:11:42.679696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.734 [2024-11-18 20:11:42.689871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.734 [2024-11-18 20:11:42.689900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.734 [2024-11-18 20:11:42.700719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.734 [2024-11-18 20:11:42.700759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.734 [2024-11-18 20:11:42.711663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.734 [2024-11-18 20:11:42.711693] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.734 [2024-11-18 20:11:42.722336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.734 [2024-11-18 20:11:42.722363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.734 [2024-11-18 20:11:42.733352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.734 [2024-11-18 20:11:42.733380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.997 [2024-11-18 20:11:42.744961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.997 [2024-11-18 20:11:42.745004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.997 [2024-11-18 20:11:42.756002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.997 [2024-11-18 20:11:42.756029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.997 [2024-11-18 20:11:42.766772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.997 [2024-11-18 20:11:42.766800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.997 [2024-11-18 20:11:42.777650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.997 [2024-11-18 20:11:42.777678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.997 [2024-11-18 20:11:42.790410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.997 [2024-11-18 20:11:42.790437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.997 [2024-11-18 20:11:42.802148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.997 [2024-11-18 20:11:42.802175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:30.997 [2024-11-18 20:11:42.811949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.997 [2024-11-18 20:11:42.811991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.997 [2024-11-18 20:11:42.823135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.997 [2024-11-18 20:11:42.823161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.997 [2024-11-18 20:11:42.833972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.997 [2024-11-18 20:11:42.834013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.997 [2024-11-18 20:11:42.844664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.997 [2024-11-18 20:11:42.844692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.997 [2024-11-18 20:11:42.855361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.997 [2024-11-18 20:11:42.855388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.997 [2024-11-18 20:11:42.867562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.997 [2024-11-18 20:11:42.867589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.997 [2024-11-18 20:11:42.877970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.997 [2024-11-18 20:11:42.877997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.997 [2024-11-18 20:11:42.888970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.997 [2024-11-18 20:11:42.889012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.997 [2024-11-18 20:11:42.902421] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.997 [2024-11-18 20:11:42.902449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.997
00:10:31.520 11606.00 IOPS, 90.67 MiB/s [2024-11-18T19:11:43.528Z]
00:10:32.568 11638.50 IOPS, 90.93 MiB/s [2024-11-18T19:11:44.576Z]
add namespace 00:10:32.831 [2024-11-18 20:11:44.819337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.831 [2024-11-18 20:11:44.819364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.831 [2024-11-18 20:11:44.829855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.831 [2024-11-18 20:11:44.829885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.091 [2024-11-18 20:11:44.840760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.091 [2024-11-18 20:11:44.840790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.091 [2024-11-18 20:11:44.853452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.091 [2024-11-18 20:11:44.853479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.091 [2024-11-18 20:11:44.863083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.091 [2024-11-18 20:11:44.863110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.091 [2024-11-18 20:11:44.873886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.091 [2024-11-18 20:11:44.873914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.091 [2024-11-18 20:11:44.884705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.091 [2024-11-18 20:11:44.884735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.091 [2024-11-18 20:11:44.896081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.091 [2024-11-18 20:11:44.896108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.091 [2024-11-18 20:11:44.908744] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.091 [2024-11-18 20:11:44.908773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.091 [2024-11-18 20:11:44.918609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.091 [2024-11-18 20:11:44.918660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.091 [2024-11-18 20:11:44.929591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.091 [2024-11-18 20:11:44.929634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.091 [2024-11-18 20:11:44.942162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.091 [2024-11-18 20:11:44.942191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.091 [2024-11-18 20:11:44.952271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.091 [2024-11-18 20:11:44.952301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.091 [2024-11-18 20:11:44.963445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.091 [2024-11-18 20:11:44.963472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.091 [2024-11-18 20:11:44.976272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.091 [2024-11-18 20:11:44.976299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.091 [2024-11-18 20:11:44.986551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.091 [2024-11-18 20:11:44.986577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.091 [2024-11-18 20:11:44.997537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:33.091 [2024-11-18 20:11:44.997564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.091 [2024-11-18 20:11:45.008392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.091 [2024-11-18 20:11:45.008419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.091 [2024-11-18 20:11:45.019398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.091 [2024-11-18 20:11:45.019425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.091 [2024-11-18 20:11:45.030512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.091 [2024-11-18 20:11:45.030539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.091 [2024-11-18 20:11:45.041538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.091 [2024-11-18 20:11:45.041564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.091 [2024-11-18 20:11:45.052955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.091 [2024-11-18 20:11:45.052998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.091 [2024-11-18 20:11:45.063945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.091 [2024-11-18 20:11:45.063973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.091 [2024-11-18 20:11:45.074820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.091 [2024-11-18 20:11:45.074848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.091 [2024-11-18 20:11:45.086112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.091 
[2024-11-18 20:11:45.086139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.091 [2024-11-18 20:11:45.096917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.091 [2024-11-18 20:11:45.096959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.351 [2024-11-18 20:11:45.107997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.351 [2024-11-18 20:11:45.108024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.351 [2024-11-18 20:11:45.118646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.351 [2024-11-18 20:11:45.118674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.351 [2024-11-18 20:11:45.129407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.351 [2024-11-18 20:11:45.129434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.351 [2024-11-18 20:11:45.140242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.351 [2024-11-18 20:11:45.140269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.351 [2024-11-18 20:11:45.153585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.351 [2024-11-18 20:11:45.153611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.351 [2024-11-18 20:11:45.164015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.351 [2024-11-18 20:11:45.164041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.351 [2024-11-18 20:11:45.174348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.351 [2024-11-18 20:11:45.174375] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.351 [2024-11-18 20:11:45.184721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.351 [2024-11-18 20:11:45.184750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.351 [2024-11-18 20:11:45.195350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.351 [2024-11-18 20:11:45.195378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.351 [2024-11-18 20:11:45.206239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.351 [2024-11-18 20:11:45.206266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.351 [2024-11-18 20:11:45.217267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.351 [2024-11-18 20:11:45.217294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.351 [2024-11-18 20:11:45.228105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.351 [2024-11-18 20:11:45.228133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.351 [2024-11-18 20:11:45.238726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.351 [2024-11-18 20:11:45.238755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.351 [2024-11-18 20:11:45.252008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.351 [2024-11-18 20:11:45.252035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.352 [2024-11-18 20:11:45.262579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.352 [2024-11-18 20:11:45.262606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:33.352 [2024-11-18 20:11:45.273238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.352 [2024-11-18 20:11:45.273265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.352 [2024-11-18 20:11:45.284016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.352 [2024-11-18 20:11:45.284043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.352 [2024-11-18 20:11:45.294755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.352 [2024-11-18 20:11:45.294783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.352 [2024-11-18 20:11:45.305411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.352 [2024-11-18 20:11:45.305438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.352 [2024-11-18 20:11:45.316290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.352 [2024-11-18 20:11:45.316317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.352 [2024-11-18 20:11:45.328931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.352 [2024-11-18 20:11:45.328959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.352 [2024-11-18 20:11:45.339230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.352 [2024-11-18 20:11:45.339257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.352 [2024-11-18 20:11:45.350233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.352 [2024-11-18 20:11:45.350260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.611 [2024-11-18 20:11:45.363352] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.611 [2024-11-18 20:11:45.363379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.611 [2024-11-18 20:11:45.374185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.611 [2024-11-18 20:11:45.374214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.611 [2024-11-18 20:11:45.385023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.611 [2024-11-18 20:11:45.385050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.611 [2024-11-18 20:11:45.395970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.611 [2024-11-18 20:11:45.396011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.611 11663.67 IOPS, 91.12 MiB/s [2024-11-18T19:11:45.619Z] [2024-11-18 20:11:45.406303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.611 [2024-11-18 20:11:45.406329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.611 [2024-11-18 20:11:45.417078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.611 [2024-11-18 20:11:45.417104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.611 [2024-11-18 20:11:45.427935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.611 [2024-11-18 20:11:45.427970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.611 [2024-11-18 20:11:45.440627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.611 [2024-11-18 20:11:45.440664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.611 [2024-11-18 20:11:45.451027] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.611 [2024-11-18 20:11:45.451054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.611 [2024-11-18 20:11:45.461662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.611 [2024-11-18 20:11:45.461704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.611 [2024-11-18 20:11:45.472724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.611 [2024-11-18 20:11:45.472753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.611 [2024-11-18 20:11:45.483951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.611 [2024-11-18 20:11:45.483979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.611 [2024-11-18 20:11:45.497066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.611 [2024-11-18 20:11:45.497093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.611 [2024-11-18 20:11:45.507305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.611 [2024-11-18 20:11:45.507332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.611 [2024-11-18 20:11:45.518190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.612 [2024-11-18 20:11:45.518216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.612 [2024-11-18 20:11:45.530263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.612 [2024-11-18 20:11:45.530290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.612 [2024-11-18 20:11:45.539599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:33.612 [2024-11-18 20:11:45.539649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.612 [2024-11-18 20:11:45.551047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.612 [2024-11-18 20:11:45.551074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.612 [2024-11-18 20:11:45.561584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.612 [2024-11-18 20:11:45.561611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.612 [2024-11-18 20:11:45.572457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.612 [2024-11-18 20:11:45.572484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.612 [2024-11-18 20:11:45.584991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.612 [2024-11-18 20:11:45.585018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.612 [2024-11-18 20:11:45.594970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.612 [2024-11-18 20:11:45.595012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.612 [2024-11-18 20:11:45.605512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.612 [2024-11-18 20:11:45.605539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.612 [2024-11-18 20:11:45.615945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.612 [2024-11-18 20:11:45.615974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.874 [2024-11-18 20:11:45.626728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.874 
[2024-11-18 20:11:45.626756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.874 [2024-11-18 20:11:45.637524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.874 [2024-11-18 20:11:45.637557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.874 [2024-11-18 20:11:45.650229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.874 [2024-11-18 20:11:45.650256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.874 [2024-11-18 20:11:45.660569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.874 [2024-11-18 20:11:45.660595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.874 [2024-11-18 20:11:45.671002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.874 [2024-11-18 20:11:45.671029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.874 [2024-11-18 20:11:45.681890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.874 [2024-11-18 20:11:45.681934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.874 [2024-11-18 20:11:45.694826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.874 [2024-11-18 20:11:45.694855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.874 [2024-11-18 20:11:45.705222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.874 [2024-11-18 20:11:45.705249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.874 [2024-11-18 20:11:45.716030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.874 [2024-11-18 20:11:45.716057] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.874 [2024-11-18 20:11:45.728565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.874 [2024-11-18 20:11:45.728592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.874 [2024-11-18 20:11:45.739478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.874 [2024-11-18 20:11:45.739505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.874 [2024-11-18 20:11:45.750021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.874 [2024-11-18 20:11:45.750048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.874 [2024-11-18 20:11:45.760948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.874 [2024-11-18 20:11:45.760975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.874 [2024-11-18 20:11:45.771688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.874 [2024-11-18 20:11:45.771727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.874 [2024-11-18 20:11:45.782813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.874 [2024-11-18 20:11:45.782842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.874 [2024-11-18 20:11:45.793520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.874 [2024-11-18 20:11:45.793547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.874 [2024-11-18 20:11:45.804740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.874 [2024-11-18 20:11:45.804769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:33.874 [2024-11-18 20:11:45.815297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.874 [2024-11-18 20:11:45.815323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.874 [2024-11-18 20:11:45.826304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.874 [2024-11-18 20:11:45.826331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.874 [2024-11-18 20:11:45.837013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.874 [2024-11-18 20:11:45.837040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.874 [2024-11-18 20:11:45.847478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.874 [2024-11-18 20:11:45.847517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.874 [2024-11-18 20:11:45.858560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.874 [2024-11-18 20:11:45.858587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.874 [2024-11-18 20:11:45.871330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.874 [2024-11-18 20:11:45.871357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.136 [2024-11-18 20:11:45.881397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.136 [2024-11-18 20:11:45.881425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.136 [2024-11-18 20:11:45.892394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.136 [2024-11-18 20:11:45.892420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.136 [2024-11-18 20:11:45.905012] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.136 [2024-11-18 20:11:45.905038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.136 [2024-11-18 20:11:45.915673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.136 [2024-11-18 20:11:45.915702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.136 [2024-11-18 20:11:45.926722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.136 [2024-11-18 20:11:45.926750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.136 [2024-11-18 20:11:45.939394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.136 [2024-11-18 20:11:45.939436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.136 [2024-11-18 20:11:45.949720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.136 [2024-11-18 20:11:45.949748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.136 [2024-11-18 20:11:45.960561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.136 [2024-11-18 20:11:45.960587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.136 [2024-11-18 20:11:45.971505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.136 [2024-11-18 20:11:45.971532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.136 [2024-11-18 20:11:45.982348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.136 [2024-11-18 20:11:45.982375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.136 [2024-11-18 20:11:45.993852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:34.136 [2024-11-18 20:11:45.993881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.659 11680.75 IOPS, 91.26 MiB/s [2024-11-18T19:11:46.667Z] [2024-11-18 20:11:46.452246]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.445 [2024-11-18 20:11:47.363619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.445 [2024-11-18 20:11:47.374092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.445 [2024-11-18 20:11:47.374119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.445 [2024-11-18 20:11:47.387654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.445 [2024-11-18 20:11:47.387682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.445 [2024-11-18 20:11:47.398139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.445 [2024-11-18 20:11:47.398165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.445 11698.20 IOPS, 91.39 MiB/s [2024-11-18T19:11:47.453Z] [2024-11-18 20:11:47.407727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.445 [2024-11-18 20:11:47.407755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.445 00:10:35.445 Latency(us) 00:10:35.445 [2024-11-18T19:11:47.453Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:35.445 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:35.445 Nvme1n1 : 5.01 11699.24 91.40 0.00 0.00 10926.53 4369.07 18932.62 00:10:35.445 [2024-11-18T19:11:47.453Z] =================================================================================================================== 00:10:35.445 [2024-11-18T19:11:47.453Z] Total : 11699.24 91.40 0.00 0.00 10926.53 4369.07 18932.62 00:10:35.445 [2024-11-18 20:11:47.412925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.445 [2024-11-18 20:11:47.412951] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.445 [2024-11-18 20:11:47.549332]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.705 [2024-11-18 20:11:47.549375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line
42: kill: (147187) - No such process 00:10:35.705 20:11:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 147187 00:10:35.705 20:11:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.705 20:11:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.705 20:11:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.705 20:11:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.705 20:11:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:35.705 20:11:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.705 20:11:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.705 delay0 00:10:35.705 20:11:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.705 20:11:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:35.705 20:11:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.705 20:11:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.705 20:11:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.705 20:11:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:35.968 [2024-11-18 20:11:47.734471] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported 
current discovery service or discovery service referral 00:10:42.563 Initializing NVMe Controllers 00:10:42.563 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:42.563 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:42.563 Initialization complete. Launching workers. 00:10:42.563 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 96 00:10:42.563 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 383, failed to submit 33 00:10:42.563 success 236, unsuccessful 147, failed 0 00:10:42.563 20:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:42.563 20:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:42.563 20:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:42.563 20:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:42.563 20:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:42.563 20:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:42.563 20:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:42.563 20:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:42.563 rmmod nvme_tcp 00:10:42.563 rmmod nvme_fabrics 00:10:42.563 rmmod nvme_keyring 00:10:42.563 20:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:42.563 20:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:42.563 20:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:42.563 20:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 145968 ']' 00:10:42.563 20:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@518 -- # killprocess 145968 00:10:42.563 20:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 145968 ']' 00:10:42.563 20:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 145968 00:10:42.563 20:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:42.563 20:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:42.563 20:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 145968 00:10:42.563 20:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:42.563 20:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:42.563 20:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 145968' 00:10:42.563 killing process with pid 145968 00:10:42.563 20:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 145968 00:10:42.563 20:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 145968 00:10:42.563 20:11:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:42.563 20:11:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:42.563 20:11:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:42.563 20:11:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:42.563 20:11:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:42.563 20:11:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:42.563 20:11:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:42.563 20:11:54 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:42.563 20:11:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:42.563 20:11:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.563 20:11:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.563 20:11:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.475 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:44.475 00:10:44.475 real 0m27.767s 00:10:44.475 user 0m41.760s 00:10:44.475 sys 0m7.454s 00:10:44.475 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.475 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:44.475 ************************************ 00:10:44.475 END TEST nvmf_zcopy 00:10:44.475 ************************************ 00:10:44.475 20:11:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:44.475 20:11:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:44.475 20:11:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:44.476 ************************************ 00:10:44.476 START TEST nvmf_nmic 00:10:44.476 ************************************ 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:44.476 * Looking for test 
storage... 00:10:44.476 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.476 
20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:44.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.476 --rc genhtml_branch_coverage=1 00:10:44.476 --rc genhtml_function_coverage=1 00:10:44.476 --rc genhtml_legend=1 00:10:44.476 --rc geninfo_all_blocks=1 00:10:44.476 --rc 
geninfo_unexecuted_blocks=1 00:10:44.476 00:10:44.476 ' 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:44.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.476 --rc genhtml_branch_coverage=1 00:10:44.476 --rc genhtml_function_coverage=1 00:10:44.476 --rc genhtml_legend=1 00:10:44.476 --rc geninfo_all_blocks=1 00:10:44.476 --rc geninfo_unexecuted_blocks=1 00:10:44.476 00:10:44.476 ' 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:44.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.476 --rc genhtml_branch_coverage=1 00:10:44.476 --rc genhtml_function_coverage=1 00:10:44.476 --rc genhtml_legend=1 00:10:44.476 --rc geninfo_all_blocks=1 00:10:44.476 --rc geninfo_unexecuted_blocks=1 00:10:44.476 00:10:44.476 ' 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:44.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.476 --rc genhtml_branch_coverage=1 00:10:44.476 --rc genhtml_function_coverage=1 00:10:44.476 --rc genhtml_legend=1 00:10:44.476 --rc geninfo_all_blocks=1 00:10:44.476 --rc geninfo_unexecuted_blocks=1 00:10:44.476 00:10:44.476 ' 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.476 
20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:44.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:44.476 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:44.477 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:44.477 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:44.477 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:44.477 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:44.477 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:44.477 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:44.477 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.477 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:44.477 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:44.477 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:44.477 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.477 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.477 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.477 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:44.477 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:44.477 
20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:44.477 20:11:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.020 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:47.020 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:47.020 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:47.020 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:47.020 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:47.020 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:47.020 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:47.020 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:47.020 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:47.021 20:11:58 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:47.021 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:47.021 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:47.021 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:47.021 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:47.021 
20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:47.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:47.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:10:47.021 00:10:47.021 --- 10.0.0.2 ping statistics --- 00:10:47.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.021 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:47.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:47.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:10:47.021 00:10:47.021 --- 10.0.0.1 ping statistics --- 00:10:47.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.021 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:47.021 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:47.022 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:47.022 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:47.022 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:47.022 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:47.022 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:47.022 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:47.022 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.022 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=150589 00:10:47.022 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:47.022 
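The trace above is `nvmf_tcp_init` from nvmf/common.sh building the test topology: one port of the NIC pair is moved into a private network namespace as the target side, the other stays in the host namespace as the initiator side, and a ping in each direction verifies the link. A dry-run sketch of that sequence, using the `cvl_0_0`/`cvl_0_1` interface names and 10.0.0.x addresses from this run (`run` only prints the commands, since the real steps need root and these interfaces):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init flow traced above.
# "run" just echoes; on a real host with these NICs, replace it with:
#   run() { sudo "$@"; }
run() { printf '+ %s\n' "$*"; }

NS=cvl_0_0_ns_spdk                  # target-side network namespace
run ip -4 addr flush cvl_0_0        # clear stale addresses on both ports
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"             # target port into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side (host namespace)
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run ping -c 1 10.0.0.2                          # host -> namespaced target
run ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> host
```

With the namespace in place, the target app is then launched as `ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...`, which is exactly the `NVMF_TARGET_NS_CMD` prefix visible in the log.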
20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 150589 00:10:47.022 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 150589 ']' 00:10:47.022 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.022 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.022 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.022 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.022 20:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.022 [2024-11-18 20:11:58.851564] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:10:47.022 [2024-11-18 20:11:58.851674] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.022 [2024-11-18 20:11:58.924962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:47.022 [2024-11-18 20:11:58.974668] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:47.022 [2024-11-18 20:11:58.974719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:47.022 [2024-11-18 20:11:58.974733] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:47.022 [2024-11-18 20:11:58.974745] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:10:47.022 [2024-11-18 20:11:58.974755] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:47.022 [2024-11-18 20:11:58.976462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.022 [2024-11-18 20:11:58.976530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:47.022 [2024-11-18 20:11:58.976594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:47.022 [2024-11-18 20:11:58.976597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.282 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.282 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:47.282 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:47.282 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:47.282 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.282 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:47.282 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:47.282 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.282 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.282 [2024-11-18 20:11:59.124973] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:47.282 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.282 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:47.282 20:11:59 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.282 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.282 Malloc0 00:10:47.282 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.282 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:47.282 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.282 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.282 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.282 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:47.282 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.282 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.282 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.282 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:47.282 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.282 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.283 [2024-11-18 20:11:59.192565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:47.283 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.283 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # 
echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:47.283 test case1: single bdev can't be used in multiple subsystems 00:10:47.283 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:47.283 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.283 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.283 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.283 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:47.283 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.283 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.283 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.283 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:47.283 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:47.283 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.283 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.283 [2024-11-18 20:11:59.216401] bdev.c:8180:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:47.283 [2024-11-18 20:11:59.216431] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:47.283 [2024-11-18 20:11:59.216447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:10:47.283 request: 00:10:47.283 { 00:10:47.283 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:47.283 "namespace": { 00:10:47.283 "bdev_name": "Malloc0", 00:10:47.283 "no_auto_visible": false 00:10:47.283 }, 00:10:47.283 "method": "nvmf_subsystem_add_ns", 00:10:47.283 "req_id": 1 00:10:47.283 } 00:10:47.283 Got JSON-RPC error response 00:10:47.283 response: 00:10:47.283 { 00:10:47.283 "code": -32602, 00:10:47.283 "message": "Invalid parameters" 00:10:47.283 } 00:10:47.283 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:47.283 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:47.283 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:47.283 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:47.283 Adding namespace failed - expected result. 00:10:47.283 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:47.283 test case2: host connect to nvmf target in multiple paths 00:10:47.283 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:47.283 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.283 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.283 [2024-11-18 20:11:59.224523] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:47.283 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.283 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect 
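Test case 1 above exercises SPDK's exclusive bdev claim: `Malloc0` is already attached to `cnode1` (claimed `exclusive_write` by the NVMe-oF target module), so attaching it to `cnode2` must fail, and the log shows the expected JSON-RPC error -32602. A dry-run sketch of the RPC sequence, with NQNs and serials taken from this run (`rpc` only echoes here, since a live `nvmf_tgt` with its rpc.py socket is needed):

```shell
#!/usr/bin/env bash
# Dry-run sketch of target/nmic.sh test case 1. "rpc" just prints;
# against a live target it would be: rpc() { scripts/rpc.py "$@"; }
rpc() { printf '+ rpc.py %s\n' "$*"; }

rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # first claim succeeds
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
# Second claim of the same bdev: on a live target this returns JSON-RPC
# error -32602 ("Invalid parameters"), which is the result the test expects.
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=1
```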
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:48.226 20:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:48.796 20:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:48.796 20:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:48.796 20:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:48.796 20:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:48.796 20:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:50.708 20:12:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:50.708 20:12:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:50.708 20:12:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:50.708 20:12:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:50.708 20:12:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:50.708 20:12:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:50.708 20:12:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:50.708 [global] 00:10:50.708 thread=1 
00:10:50.708 invalidate=1 00:10:50.708 rw=write 00:10:50.708 time_based=1 00:10:50.708 runtime=1 00:10:50.708 ioengine=libaio 00:10:50.708 direct=1 00:10:50.708 bs=4096 00:10:50.708 iodepth=1 00:10:50.708 norandommap=0 00:10:50.708 numjobs=1 00:10:50.708 00:10:50.708 verify_dump=1 00:10:50.708 verify_backlog=512 00:10:50.708 verify_state_save=0 00:10:50.708 do_verify=1 00:10:50.708 verify=crc32c-intel 00:10:50.708 [job0] 00:10:50.708 filename=/dev/nvme0n1 00:10:50.708 Could not set queue depth (nvme0n1) 00:10:51.285 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:51.285 fio-3.35 00:10:51.285 Starting 1 thread 00:10:52.226 00:10:52.226 job0: (groupid=0, jobs=1): err= 0: pid=151341: Mon Nov 18 20:12:04 2024 00:10:52.226 read: IOPS=1275, BW=5103KiB/s (5225kB/s)(5108KiB/1001msec) 00:10:52.226 slat (nsec): min=6646, max=33950, avg=7679.24, stdev=2138.18 00:10:52.226 clat (usec): min=190, max=42022, avg=525.48, stdev=3474.84 00:10:52.226 lat (usec): min=197, max=42040, avg=533.16, stdev=3476.58 00:10:52.226 clat percentiles (usec): 00:10:52.226 | 1.00th=[ 200], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 223], 00:10:52.226 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 237], 00:10:52.226 | 70.00th=[ 239], 80.00th=[ 243], 90.00th=[ 247], 95.00th=[ 251], 00:10:52.226 | 99.00th=[ 265], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:10:52.226 | 99.99th=[42206] 00:10:52.226 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:52.226 slat (usec): min=8, max=29581, avg=32.13, stdev=754.47 00:10:52.226 clat (usec): min=142, max=345, avg=170.10, stdev=16.61 00:10:52.226 lat (usec): min=151, max=29831, avg=202.23, stdev=756.79 00:10:52.226 clat percentiles (usec): 00:10:52.226 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:10:52.226 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:10:52.226 | 70.00th=[ 178], 80.00th=[ 186], 90.00th=[ 
194], 95.00th=[ 200], 00:10:52.226 | 99.00th=[ 217], 99.50th=[ 223], 99.90th=[ 265], 99.95th=[ 347], 00:10:52.226 | 99.99th=[ 347] 00:10:52.226 bw ( KiB/s): min= 4096, max= 4096, per=66.73%, avg=4096.00, stdev= 0.00, samples=1 00:10:52.226 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:52.226 lat (usec) : 250=97.26%, 500=2.38%, 1000=0.04% 00:10:52.226 lat (msec) : 50=0.32% 00:10:52.226 cpu : usr=2.20%, sys=4.00%, ctx=2815, majf=0, minf=1 00:10:52.226 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.226 issued rwts: total=1277,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.226 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.226 00:10:52.226 Run status group 0 (all jobs): 00:10:52.226 READ: bw=5103KiB/s (5225kB/s), 5103KiB/s-5103KiB/s (5225kB/s-5225kB/s), io=5108KiB (5231kB), run=1001-1001msec 00:10:52.226 WRITE: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), run=1001-1001msec 00:10:52.226 00:10:52.226 Disk stats (read/write): 00:10:52.226 nvme0n1: ios=1049/1304, merge=0/0, ticks=1556/215, in_queue=1771, util=98.60% 00:10:52.226 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:52.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:52.489 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:52.489 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:52.489 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:52.489 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:10:52.489 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:52.489 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:52.489 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:52.489 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:52.489 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:52.489 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:52.489 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:52.489 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:52.489 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:52.489 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:52.489 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:52.489 rmmod nvme_tcp 00:10:52.489 rmmod nvme_fabrics 00:10:52.489 rmmod nvme_keyring 00:10:52.489 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:52.489 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:52.489 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:52.489 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 150589 ']' 00:10:52.489 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 150589 00:10:52.489 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 150589 ']' 00:10:52.489 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 
150589 00:10:52.489 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:52.489 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:52.489 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 150589 00:10:52.489 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:52.489 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:52.489 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 150589' 00:10:52.489 killing process with pid 150589 00:10:52.489 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 150589 00:10:52.489 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 150589 00:10:52.751 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:52.751 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:52.751 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:52.751 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:52.751 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:52.751 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:52.751 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:52.751 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:52.751 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:52.751 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.751 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.751 20:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.665 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:54.665 00:10:54.665 real 0m10.400s 00:10:54.665 user 0m23.375s 00:10:54.665 sys 0m2.866s 00:10:54.665 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.665 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.665 ************************************ 00:10:54.665 END TEST nvmf_nmic 00:10:54.665 ************************************ 00:10:54.925 20:12:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:54.925 20:12:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:54.925 20:12:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.925 20:12:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:54.925 ************************************ 00:10:54.925 START TEST nvmf_fio_target 00:10:54.925 ************************************ 00:10:54.925 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:54.925 * Looking for test storage... 
00:10:54.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:54.926 20:12:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:54.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.926 
--rc genhtml_branch_coverage=1 00:10:54.926 --rc genhtml_function_coverage=1 00:10:54.926 --rc genhtml_legend=1 00:10:54.926 --rc geninfo_all_blocks=1 00:10:54.926 --rc geninfo_unexecuted_blocks=1 00:10:54.926 00:10:54.926 ' 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:54.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.926 --rc genhtml_branch_coverage=1 00:10:54.926 --rc genhtml_function_coverage=1 00:10:54.926 --rc genhtml_legend=1 00:10:54.926 --rc geninfo_all_blocks=1 00:10:54.926 --rc geninfo_unexecuted_blocks=1 00:10:54.926 00:10:54.926 ' 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:54.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.926 --rc genhtml_branch_coverage=1 00:10:54.926 --rc genhtml_function_coverage=1 00:10:54.926 --rc genhtml_legend=1 00:10:54.926 --rc geninfo_all_blocks=1 00:10:54.926 --rc geninfo_unexecuted_blocks=1 00:10:54.926 00:10:54.926 ' 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:54.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.926 --rc genhtml_branch_coverage=1 00:10:54.926 --rc genhtml_function_coverage=1 00:10:54.926 --rc genhtml_legend=1 00:10:54.926 --rc geninfo_all_blocks=1 00:10:54.926 --rc geninfo_unexecuted_blocks=1 00:10:54.926 00:10:54.926 ' 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.926 
20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 
00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.926 20:12:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:54.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:54.926 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:54.927 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:54.927 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:54.927 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:54.927 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:54.927 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:54.927 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:54.927 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:54.927 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:54.927 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.927 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.927 20:12:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.927 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:54.927 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:54.927 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:54.927 20:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.464 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:57.464 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:57.464 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:57.464 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:57.464 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:57.464 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:57.464 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:57.464 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:57.464 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:57.464 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:57.464 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:57.464 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:57.464 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:57.464 20:12:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:57.464 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:57.464 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:57.464 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:57.464 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:57.464 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:57.464 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- 
# [[ tcp == rdma ]] 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:57.465 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:57.465 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.465 
20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:57.465 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.465 20:12:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:57.465 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 
)) 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:57.465 20:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:57.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:57.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:10:57.465 00:10:57.465 --- 10.0.0.2 ping statistics --- 00:10:57.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.465 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:57.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:57.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:10:57.465 00:10:57.465 --- 10.0.0.1 ping statistics --- 00:10:57.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.465 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=153929 00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 153929 00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 153929 ']' 00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
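The `waitforlisten 153929` call above blocks until the freshly started `nvmf_tgt` is up and its RPC socket (`/var/tmp/spdk.sock`) is accepting connections. A minimal sketch of that bounded-retry pattern, assuming a plain path-existence check rather than the real implementation (`wait_for_path` is a hypothetical stand-in, not SPDK's helper):

```shell
#!/usr/bin/env bash
# Sketch: poll for a path (e.g. an RPC unix socket) with bounded retries.
# wait_for_path is a hypothetical stand-in for SPDK's waitforlisten.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i=0
    while (( i++ < max_retries )); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}

# Demo: a background job creates the path shortly after we start polling.
demo_path="/tmp/wait_demo_$$"
( sleep 0.2; touch "$demo_path" ) &
wait_for_path "$demo_path" && echo "socket appeared"
rm -f "$demo_path"
```

The real helper additionally checks that the PID is still alive and that the socket accepts RPCs, so a crash fails fast instead of burning the full retry budget.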
00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.465 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.466 [2024-11-18 20:12:09.182705] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:10:57.466 [2024-11-18 20:12:09.182768] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.466 [2024-11-18 20:12:09.250964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:57.466 [2024-11-18 20:12:09.302204] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:57.466 [2024-11-18 20:12:09.302251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:57.466 [2024-11-18 20:12:09.302273] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:57.466 [2024-11-18 20:12:09.302299] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:57.466 [2024-11-18 20:12:09.302316] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:57.466 [2024-11-18 20:12:09.304150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.466 [2024-11-18 20:12:09.304222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.466 [2024-11-18 20:12:09.304282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.466 [2024-11-18 20:12:09.304286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.466 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:57.466 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:57.466 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:57.466 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:57.466 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.466 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.466 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:57.724 [2024-11-18 20:12:09.683334] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:57.724 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:58.295 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:58.295 20:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:58.554 20:12:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:58.554 20:12:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:58.813 20:12:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:58.813 20:12:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.072 20:12:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:59.072 20:12:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:59.330 20:12:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.589 20:12:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:59.589 20:12:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.847 20:12:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:59.847 20:12:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:00.104 20:12:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:00.104 20:12:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:11:00.364 20:12:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:00.623 20:12:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:00.623 20:12:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:00.881 20:12:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:00.881 20:12:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:01.140 20:12:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.398 [2024-11-18 20:12:13.354388] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:01.398 20:12:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:01.657 20:12:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:02.227 20:12:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
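After the `nvme connect` above, `waitforserial SPDKISFASTANDAWESOME 4` polls until all four namespaces surface as block devices, by counting serial matches in `lsblk -l -o NAME,SERIAL` output (visible in the trace that follows). A minimal sketch of that counting step; `count_serial` is a hypothetical helper that reads lsblk-style lines on stdin so it can be exercised without real NVMe devices:

```shell
#!/usr/bin/env bash
# Sketch: count block devices whose serial matches, mirroring the log's
#   lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME
# count_serial takes lsblk-style "NAME SERIAL" lines on stdin for testability.
count_serial() {
    grep -c "$1"
}

# Two of three devices carry the target serial, so this prints 2.
printf 'nvme0n1 SPDKISFASTANDAWESOME\nnvme0n2 SPDKISFASTANDAWESOME\nsda XYZ\n' \
    | count_serial SPDKISFASTANDAWESOME
```

The surrounding loop in the trace (`(( i++ <= 15 ))` ... `(( nvme_devices == nvme_device_counter ))`) retries until the count reaches the expected 4 namespaces or the retry budget runs out.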
00:11:02.796 20:12:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:02.796 20:12:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:02.797 20:12:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:02.797 20:12:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:02.797 20:12:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:02.797 20:12:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:04.708 20:12:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:04.708 20:12:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:04.708 20:12:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:04.708 20:12:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:04.708 20:12:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:04.708 20:12:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:04.708 20:12:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:04.708 [global] 00:11:04.708 thread=1 00:11:04.708 invalidate=1 00:11:04.708 rw=write 00:11:04.708 time_based=1 00:11:04.708 runtime=1 00:11:04.708 ioengine=libaio 00:11:04.708 direct=1 00:11:04.708 bs=4096 00:11:04.708 iodepth=1 00:11:04.708 norandommap=0 00:11:04.708 numjobs=1 00:11:04.708 00:11:04.708 
verify_dump=1 00:11:04.708 verify_backlog=512 00:11:04.708 verify_state_save=0 00:11:04.708 do_verify=1 00:11:04.708 verify=crc32c-intel 00:11:04.708 [job0] 00:11:04.708 filename=/dev/nvme0n1 00:11:04.708 [job1] 00:11:04.708 filename=/dev/nvme0n2 00:11:04.708 [job2] 00:11:04.708 filename=/dev/nvme0n3 00:11:04.708 [job3] 00:11:04.708 filename=/dev/nvme0n4 00:11:04.708 Could not set queue depth (nvme0n1) 00:11:04.708 Could not set queue depth (nvme0n2) 00:11:04.708 Could not set queue depth (nvme0n3) 00:11:04.708 Could not set queue depth (nvme0n4) 00:11:04.967 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.967 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.967 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.967 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.967 fio-3.35 00:11:04.967 Starting 4 threads 00:11:06.351 00:11:06.351 job0: (groupid=0, jobs=1): err= 0: pid=155009: Mon Nov 18 20:12:18 2024 00:11:06.351 read: IOPS=2031, BW=8128KiB/s (8323kB/s)(8144KiB/1002msec) 00:11:06.351 slat (nsec): min=4662, max=57868, avg=13254.86, stdev=9004.13 00:11:06.351 clat (usec): min=179, max=870, avg=253.11, stdev=64.38 00:11:06.351 lat (usec): min=187, max=875, avg=266.36, stdev=70.19 00:11:06.351 clat percentiles (usec): 00:11:06.351 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 212], 00:11:06.351 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 235], 00:11:06.351 | 70.00th=[ 249], 80.00th=[ 297], 90.00th=[ 355], 95.00th=[ 379], 00:11:06.351 | 99.00th=[ 469], 99.50th=[ 486], 99.90th=[ 619], 99.95th=[ 685], 00:11:06.351 | 99.99th=[ 873] 00:11:06.351 write: IOPS=2043, BW=8176KiB/s (8372kB/s)(8192KiB/1002msec); 0 zone resets 00:11:06.351 slat (nsec): min=6130, max=64685, avg=14806.22, stdev=6277.39 
00:11:06.351 clat (usec): min=135, max=829, avg=200.72, stdev=49.76 00:11:06.351 lat (usec): min=144, max=836, avg=215.53, stdev=49.50 00:11:06.351 clat percentiles (usec): 00:11:06.351 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 155], 20.00th=[ 161], 00:11:06.351 | 30.00th=[ 167], 40.00th=[ 176], 50.00th=[ 186], 60.00th=[ 198], 00:11:06.351 | 70.00th=[ 221], 80.00th=[ 245], 90.00th=[ 273], 95.00th=[ 285], 00:11:06.351 | 99.00th=[ 322], 99.50th=[ 330], 99.90th=[ 635], 99.95th=[ 644], 00:11:06.351 | 99.99th=[ 832] 00:11:06.351 bw ( KiB/s): min= 9128, max= 9128, per=41.29%, avg=9128.00, stdev= 0.00, samples=1 00:11:06.351 iops : min= 2282, max= 2282, avg=2282.00, stdev= 0.00, samples=1 00:11:06.351 lat (usec) : 250=76.30%, 500=23.43%, 750=0.22%, 1000=0.05% 00:11:06.351 cpu : usr=3.20%, sys=5.89%, ctx=4088, majf=0, minf=1 00:11:06.351 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.351 issued rwts: total=2036,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.352 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.352 job1: (groupid=0, jobs=1): err= 0: pid=155010: Mon Nov 18 20:12:18 2024 00:11:06.352 read: IOPS=21, BW=87.9KiB/s (90.0kB/s)(88.0KiB/1001msec) 00:11:06.352 slat (nsec): min=8777, max=44646, avg=23323.14, stdev=10150.93 00:11:06.352 clat (usec): min=331, max=41064, avg=39100.57, stdev=8659.94 00:11:06.352 lat (usec): min=340, max=41082, avg=39123.89, stdev=8663.18 00:11:06.352 clat percentiles (usec): 00:11:06.352 | 1.00th=[ 330], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:06.352 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:06.352 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:06.352 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:06.352 | 
99.99th=[41157] 00:11:06.352 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:11:06.352 slat (usec): min=6, max=671, avg=15.17, stdev=29.84 00:11:06.352 clat (usec): min=142, max=526, avg=253.81, stdev=56.80 00:11:06.352 lat (usec): min=155, max=1047, avg=268.99, stdev=65.76 00:11:06.352 clat percentiles (usec): 00:11:06.352 | 1.00th=[ 151], 5.00th=[ 194], 10.00th=[ 210], 20.00th=[ 221], 00:11:06.352 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 247], 00:11:06.352 | 70.00th=[ 255], 80.00th=[ 269], 90.00th=[ 326], 95.00th=[ 392], 00:11:06.352 | 99.00th=[ 461], 99.50th=[ 486], 99.90th=[ 529], 99.95th=[ 529], 00:11:06.352 | 99.99th=[ 529] 00:11:06.352 bw ( KiB/s): min= 4096, max= 4096, per=18.53%, avg=4096.00, stdev= 0.00, samples=1 00:11:06.352 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:06.352 lat (usec) : 250=61.80%, 500=34.08%, 750=0.19% 00:11:06.352 lat (msec) : 50=3.93% 00:11:06.352 cpu : usr=0.30%, sys=0.70%, ctx=538, majf=0, minf=1 00:11:06.352 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.352 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.352 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.352 job2: (groupid=0, jobs=1): err= 0: pid=155013: Mon Nov 18 20:12:18 2024 00:11:06.352 read: IOPS=1060, BW=4244KiB/s (4346kB/s)(4248KiB/1001msec) 00:11:06.352 slat (nsec): min=5646, max=62415, avg=11811.27, stdev=7090.22 00:11:06.352 clat (usec): min=204, max=41253, avg=562.74, stdev=3280.70 00:11:06.352 lat (usec): min=212, max=41269, avg=574.55, stdev=3281.02 00:11:06.352 clat percentiles (usec): 00:11:06.352 | 1.00th=[ 210], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 239], 00:11:06.352 | 30.00th=[ 247], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 277], 
00:11:06.352 | 70.00th=[ 297], 80.00th=[ 367], 90.00th=[ 441], 95.00th=[ 486], 00:11:06.352 | 99.00th=[ 529], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:11:06.352 | 99.99th=[41157] 00:11:06.352 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:06.352 slat (nsec): min=6992, max=84206, avg=16121.48, stdev=7382.99 00:11:06.352 clat (usec): min=166, max=451, avg=231.05, stdev=34.40 00:11:06.352 lat (usec): min=182, max=462, avg=247.17, stdev=32.33 00:11:06.352 clat percentiles (usec): 00:11:06.352 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 200], 00:11:06.352 | 30.00th=[ 210], 40.00th=[ 221], 50.00th=[ 229], 60.00th=[ 237], 00:11:06.352 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 273], 95.00th=[ 289], 00:11:06.352 | 99.00th=[ 355], 99.50th=[ 367], 99.90th=[ 388], 99.95th=[ 453], 00:11:06.352 | 99.99th=[ 453] 00:11:06.352 bw ( KiB/s): min= 4982, max= 4982, per=22.53%, avg=4982.00, stdev= 0.00, samples=1 00:11:06.352 iops : min= 1245, max= 1245, avg=1245.00, stdev= 0.00, samples=1 00:11:06.352 lat (usec) : 250=59.89%, 500=39.07%, 750=0.77% 00:11:06.352 lat (msec) : 50=0.27% 00:11:06.352 cpu : usr=2.70%, sys=5.00%, ctx=2599, majf=0, minf=1 00:11:06.352 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.352 issued rwts: total=1062,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.352 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.352 job3: (groupid=0, jobs=1): err= 0: pid=155014: Mon Nov 18 20:12:18 2024 00:11:06.352 read: IOPS=1339, BW=5358KiB/s (5487kB/s)(5460KiB/1019msec) 00:11:06.352 slat (nsec): min=4478, max=57722, avg=14483.57, stdev=9540.94 00:11:06.352 clat (usec): min=189, max=41005, avg=502.13, stdev=2978.02 00:11:06.352 lat (usec): min=195, max=41021, avg=516.62, stdev=2978.51 
00:11:06.352 clat percentiles (usec): 00:11:06.352 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 223], 00:11:06.352 | 30.00th=[ 229], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 265], 00:11:06.352 | 70.00th=[ 293], 80.00th=[ 347], 90.00th=[ 371], 95.00th=[ 396], 00:11:06.352 | 99.00th=[ 537], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:11:06.352 | 99.99th=[41157] 00:11:06.352 write: IOPS=1507, BW=6029KiB/s (6174kB/s)(6144KiB/1019msec); 0 zone resets 00:11:06.352 slat (nsec): min=5869, max=82169, avg=14914.99, stdev=5168.01 00:11:06.352 clat (usec): min=147, max=447, avg=181.05, stdev=33.44 00:11:06.352 lat (usec): min=158, max=460, avg=195.97, stdev=32.96 00:11:06.352 clat percentiles (usec): 00:11:06.352 | 1.00th=[ 153], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 161], 00:11:06.352 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 178], 00:11:06.352 | 70.00th=[ 186], 80.00th=[ 194], 90.00th=[ 208], 95.00th=[ 223], 00:11:06.352 | 99.00th=[ 347], 99.50th=[ 367], 99.90th=[ 388], 99.95th=[ 449], 00:11:06.352 | 99.99th=[ 449] 00:11:06.352 bw ( KiB/s): min= 4096, max= 8192, per=27.79%, avg=6144.00, stdev=2896.31, samples=2 00:11:06.352 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:11:06.352 lat (usec) : 250=76.25%, 500=22.82%, 750=0.62%, 1000=0.03% 00:11:06.352 lat (msec) : 50=0.28% 00:11:06.352 cpu : usr=1.96%, sys=4.72%, ctx=2902, majf=0, minf=1 00:11:06.352 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.352 issued rwts: total=1365,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.352 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.352 00:11:06.352 Run status group 0 (all jobs): 00:11:06.352 READ: bw=17.2MiB/s (18.0MB/s), 87.9KiB/s-8128KiB/s (90.0kB/s-8323kB/s), io=17.5MiB (18.4MB), 
run=1001-1019msec 00:11:06.352 WRITE: bw=21.6MiB/s (22.6MB/s), 2046KiB/s-8176KiB/s (2095kB/s-8372kB/s), io=22.0MiB (23.1MB), run=1001-1019msec 00:11:06.352 00:11:06.352 Disk stats (read/write): 00:11:06.352 nvme0n1: ios=1666/2048, merge=0/0, ticks=600/393, in_queue=993, util=85.17% 00:11:06.352 nvme0n2: ios=67/512, merge=0/0, ticks=804/125, in_queue=929, util=89.22% 00:11:06.352 nvme0n3: ios=1010/1024, merge=0/0, ticks=626/230, in_queue=856, util=94.65% 00:11:06.352 nvme0n4: ios=1172/1536, merge=0/0, ticks=583/260, in_queue=843, util=95.46% 00:11:06.352 20:12:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:06.352 [global] 00:11:06.352 thread=1 00:11:06.352 invalidate=1 00:11:06.352 rw=randwrite 00:11:06.352 time_based=1 00:11:06.352 runtime=1 00:11:06.352 ioengine=libaio 00:11:06.352 direct=1 00:11:06.352 bs=4096 00:11:06.352 iodepth=1 00:11:06.352 norandommap=0 00:11:06.352 numjobs=1 00:11:06.352 00:11:06.352 verify_dump=1 00:11:06.352 verify_backlog=512 00:11:06.352 verify_state_save=0 00:11:06.352 do_verify=1 00:11:06.352 verify=crc32c-intel 00:11:06.352 [job0] 00:11:06.352 filename=/dev/nvme0n1 00:11:06.352 [job1] 00:11:06.352 filename=/dev/nvme0n2 00:11:06.352 [job2] 00:11:06.352 filename=/dev/nvme0n3 00:11:06.352 [job3] 00:11:06.352 filename=/dev/nvme0n4 00:11:06.352 Could not set queue depth (nvme0n1) 00:11:06.352 Could not set queue depth (nvme0n2) 00:11:06.352 Could not set queue depth (nvme0n3) 00:11:06.352 Could not set queue depth (nvme0n4) 00:11:06.352 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.352 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.352 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.352 job3: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.352 fio-3.35 00:11:06.352 Starting 4 threads 00:11:07.738 00:11:07.738 job0: (groupid=0, jobs=1): err= 0: pid=155277: Mon Nov 18 20:12:19 2024 00:11:07.738 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:07.738 slat (nsec): min=6166, max=51663, avg=12940.52, stdev=8924.44 00:11:07.738 clat (usec): min=178, max=41978, avg=1615.78, stdev=7348.44 00:11:07.739 lat (usec): min=185, max=42012, avg=1628.72, stdev=7350.48 00:11:07.739 clat percentiles (usec): 00:11:07.739 | 1.00th=[ 186], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:11:07.739 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 231], 00:11:07.739 | 70.00th=[ 245], 80.00th=[ 269], 90.00th=[ 465], 95.00th=[ 494], 00:11:07.739 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:07.739 | 99.99th=[42206] 00:11:07.739 write: IOPS=567, BW=2270KiB/s (2324kB/s)(2272KiB/1001msec); 0 zone resets 00:11:07.739 slat (nsec): min=7213, max=66347, avg=18052.33, stdev=9648.79 00:11:07.739 clat (usec): min=139, max=1159, avg=265.43, stdev=76.40 00:11:07.739 lat (usec): min=148, max=1188, avg=283.48, stdev=77.35 00:11:07.739 clat percentiles (usec): 00:11:07.739 | 1.00th=[ 153], 5.00th=[ 165], 10.00th=[ 182], 20.00th=[ 208], 00:11:07.739 | 30.00th=[ 229], 40.00th=[ 243], 50.00th=[ 260], 60.00th=[ 273], 00:11:07.739 | 70.00th=[ 289], 80.00th=[ 310], 90.00th=[ 363], 95.00th=[ 400], 00:11:07.739 | 99.00th=[ 429], 99.50th=[ 478], 99.90th=[ 1156], 99.95th=[ 1156], 00:11:07.739 | 99.99th=[ 1156] 00:11:07.739 bw ( KiB/s): min= 4096, max= 4096, per=47.11%, avg=4096.00, stdev= 0.00, samples=1 00:11:07.739 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:07.739 lat (usec) : 250=59.26%, 500=38.33%, 750=0.65%, 1000=0.09% 00:11:07.739 lat (msec) : 2=0.09%, 50=1.57% 00:11:07.739 cpu : usr=0.90%, sys=2.20%, ctx=1082, majf=0, minf=1 00:11:07.739 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.739 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.739 issued rwts: total=512,568,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.739 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.739 job1: (groupid=0, jobs=1): err= 0: pid=155296: Mon Nov 18 20:12:19 2024 00:11:07.739 read: IOPS=22, BW=89.2KiB/s (91.4kB/s)(92.0KiB/1031msec) 00:11:07.739 slat (nsec): min=9150, max=35540, avg=20935.96, stdev=8994.78 00:11:07.739 clat (usec): min=221, max=42260, avg=39590.70, stdev=8597.32 00:11:07.739 lat (usec): min=237, max=42270, avg=39611.64, stdev=8598.38 00:11:07.739 clat percentiles (usec): 00:11:07.739 | 1.00th=[ 223], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:07.739 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:07.739 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:07.739 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:07.739 | 99.99th=[42206] 00:11:07.739 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:11:07.739 slat (nsec): min=7464, max=66154, avg=17669.66, stdev=8321.08 00:11:07.739 clat (usec): min=148, max=505, avg=211.48, stdev=48.81 00:11:07.739 lat (usec): min=157, max=528, avg=229.15, stdev=50.50 00:11:07.739 clat percentiles (usec): 00:11:07.739 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 172], 20.00th=[ 182], 00:11:07.739 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 204], 00:11:07.739 | 70.00th=[ 217], 80.00th=[ 229], 90.00th=[ 269], 95.00th=[ 306], 00:11:07.739 | 99.00th=[ 404], 99.50th=[ 429], 99.90th=[ 506], 99.95th=[ 506], 00:11:07.739 | 99.99th=[ 506] 00:11:07.739 bw ( KiB/s): min= 4096, max= 4096, per=47.11%, avg=4096.00, stdev= 0.00, samples=1 00:11:07.739 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, 
samples=1 00:11:07.739 lat (usec) : 250=82.06%, 500=13.64%, 750=0.19% 00:11:07.739 lat (msec) : 50=4.11% 00:11:07.739 cpu : usr=0.68%, sys=1.07%, ctx=537, majf=0, minf=1 00:11:07.739 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.739 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.739 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.739 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.739 job2: (groupid=0, jobs=1): err= 0: pid=155330: Mon Nov 18 20:12:19 2024 00:11:07.739 read: IOPS=21, BW=86.3KiB/s (88.3kB/s)(88.0KiB/1020msec) 00:11:07.739 slat (nsec): min=7396, max=34565, avg=22437.68, stdev=9416.03 00:11:07.739 clat (usec): min=40903, max=41966, avg=41059.10, stdev=296.05 00:11:07.739 lat (usec): min=40923, max=42000, avg=41081.53, stdev=296.84 00:11:07.739 clat percentiles (usec): 00:11:07.739 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:07.739 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:07.739 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:11:07.739 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:07.739 | 99.99th=[42206] 00:11:07.739 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:11:07.739 slat (nsec): min=6413, max=53662, avg=14547.80, stdev=6530.57 00:11:07.739 clat (usec): min=144, max=471, avg=208.64, stdev=36.94 00:11:07.739 lat (usec): min=153, max=487, avg=223.19, stdev=37.62 00:11:07.739 clat percentiles (usec): 00:11:07.739 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 174], 20.00th=[ 180], 00:11:07.739 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 210], 00:11:07.739 | 70.00th=[ 219], 80.00th=[ 235], 90.00th=[ 253], 95.00th=[ 273], 00:11:07.739 | 99.00th=[ 343], 99.50th=[ 379], 99.90th=[ 474], 99.95th=[ 474], 
00:11:07.739 | 99.99th=[ 474] 00:11:07.739 bw ( KiB/s): min= 4096, max= 4096, per=47.11%, avg=4096.00, stdev= 0.00, samples=1 00:11:07.739 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:07.739 lat (usec) : 250=85.21%, 500=10.67% 00:11:07.739 lat (msec) : 50=4.12% 00:11:07.739 cpu : usr=0.39%, sys=0.69%, ctx=535, majf=0, minf=1 00:11:07.739 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.739 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.739 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.739 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.739 job3: (groupid=0, jobs=1): err= 0: pid=155342: Mon Nov 18 20:12:19 2024 00:11:07.739 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:07.739 slat (nsec): min=8130, max=60350, avg=15642.42, stdev=6779.04 00:11:07.739 clat (usec): min=208, max=42287, avg=1579.65, stdev=7133.78 00:11:07.739 lat (usec): min=218, max=42302, avg=1595.29, stdev=7134.26 00:11:07.739 clat percentiles (usec): 00:11:07.739 | 1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 239], 00:11:07.739 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 255], 00:11:07.739 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 371], 95.00th=[ 529], 00:11:07.739 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:11:07.739 | 99.99th=[42206] 00:11:07.739 write: IOPS=648, BW=2593KiB/s (2656kB/s)(2596KiB/1001msec); 0 zone resets 00:11:07.739 slat (nsec): min=7218, max=64203, avg=17946.68, stdev=9526.94 00:11:07.739 clat (usec): min=146, max=3449, avg=256.03, stdev=144.92 00:11:07.739 lat (usec): min=154, max=3461, avg=273.97, stdev=144.87 00:11:07.739 clat percentiles (usec): 00:11:07.739 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 186], 00:11:07.739 | 30.00th=[ 206], 40.00th=[ 221], 50.00th=[ 235], 
60.00th=[ 255], 00:11:07.739 | 70.00th=[ 281], 80.00th=[ 310], 90.00th=[ 371], 95.00th=[ 396], 00:11:07.739 | 99.00th=[ 441], 99.50th=[ 449], 99.90th=[ 3458], 99.95th=[ 3458], 00:11:07.739 | 99.99th=[ 3458] 00:11:07.739 bw ( KiB/s): min= 4096, max= 4096, per=47.11%, avg=4096.00, stdev= 0.00, samples=1 00:11:07.739 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:07.739 lat (usec) : 250=52.28%, 500=45.05%, 750=1.03% 00:11:07.739 lat (msec) : 4=0.17%, 20=0.09%, 50=1.38% 00:11:07.739 cpu : usr=1.30%, sys=2.20%, ctx=1162, majf=0, minf=1 00:11:07.739 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.739 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.739 issued rwts: total=512,649,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.739 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.739 00:11:07.739 Run status group 0 (all jobs): 00:11:07.739 READ: bw=4147KiB/s (4247kB/s), 86.3KiB/s-2046KiB/s (88.3kB/s-2095kB/s), io=4276KiB (4379kB), run=1001-1031msec 00:11:07.739 WRITE: bw=8694KiB/s (8903kB/s), 1986KiB/s-2593KiB/s (2034kB/s-2656kB/s), io=8964KiB (9179kB), run=1001-1031msec 00:11:07.739 00:11:07.739 Disk stats (read/write): 00:11:07.739 nvme0n1: ios=21/512, merge=0/0, ticks=662/135, in_queue=797, util=85.17% 00:11:07.739 nvme0n2: ios=42/512, merge=0/0, ticks=1699/95, in_queue=1794, util=96.54% 00:11:07.739 nvme0n3: ios=40/512, merge=0/0, ticks=1654/108, in_queue=1762, util=97.49% 00:11:07.739 nvme0n4: ios=124/512, merge=0/0, ticks=1632/135, in_queue=1767, util=96.31% 00:11:07.739 20:12:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:07.739 [global] 00:11:07.739 thread=1 00:11:07.739 invalidate=1 00:11:07.739 rw=write 00:11:07.739 time_based=1 
00:11:07.739 runtime=1 00:11:07.739 ioengine=libaio 00:11:07.739 direct=1 00:11:07.739 bs=4096 00:11:07.739 iodepth=128 00:11:07.739 norandommap=0 00:11:07.739 numjobs=1 00:11:07.739 00:11:07.739 verify_dump=1 00:11:07.739 verify_backlog=512 00:11:07.739 verify_state_save=0 00:11:07.739 do_verify=1 00:11:07.739 verify=crc32c-intel 00:11:07.739 [job0] 00:11:07.739 filename=/dev/nvme0n1 00:11:07.739 [job1] 00:11:07.739 filename=/dev/nvme0n2 00:11:07.739 [job2] 00:11:07.739 filename=/dev/nvme0n3 00:11:07.739 [job3] 00:11:07.739 filename=/dev/nvme0n4 00:11:07.739 Could not set queue depth (nvme0n1) 00:11:07.739 Could not set queue depth (nvme0n2) 00:11:07.739 Could not set queue depth (nvme0n3) 00:11:07.740 Could not set queue depth (nvme0n4) 00:11:07.999 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.999 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.999 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.999 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.999 fio-3.35 00:11:07.999 Starting 4 threads 00:11:09.385 00:11:09.385 job0: (groupid=0, jobs=1): err= 0: pid=155594: Mon Nov 18 20:12:20 2024 00:11:09.385 read: IOPS=2792, BW=10.9MiB/s (11.4MB/s)(11.0MiB/1007msec) 00:11:09.385 slat (usec): min=2, max=32522, avg=168.18, stdev=1291.04 00:11:09.385 clat (usec): min=4846, max=74002, avg=19998.92, stdev=12226.40 00:11:09.385 lat (usec): min=7742, max=74041, avg=20167.10, stdev=12331.43 00:11:09.385 clat percentiles (usec): 00:11:09.385 | 1.00th=[ 8455], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[11076], 00:11:09.385 | 30.00th=[11338], 40.00th=[12780], 50.00th=[13435], 60.00th=[17433], 00:11:09.385 | 70.00th=[22938], 80.00th=[27132], 90.00th=[41681], 95.00th=[45876], 00:11:09.385 | 99.00th=[56361], 
99.50th=[56361], 99.90th=[57410], 99.95th=[67634], 00:11:09.385 | 99.99th=[73925] 00:11:09.385 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:11:09.385 slat (usec): min=3, max=24766, avg=163.49, stdev=910.12 00:11:09.385 clat (usec): min=5730, max=67518, avg=23139.46, stdev=14562.40 00:11:09.385 lat (usec): min=5742, max=67540, avg=23302.95, stdev=14646.57 00:11:09.385 clat percentiles (usec): 00:11:09.385 | 1.00th=[ 6980], 5.00th=[ 7832], 10.00th=[ 9634], 20.00th=[11731], 00:11:09.385 | 30.00th=[12518], 40.00th=[14222], 50.00th=[21103], 60.00th=[21890], 00:11:09.385 | 70.00th=[26346], 80.00th=[30016], 90.00th=[46924], 95.00th=[58459], 00:11:09.385 | 99.00th=[67634], 99.50th=[67634], 99.90th=[67634], 99.95th=[67634], 00:11:09.385 | 99.99th=[67634] 00:11:09.385 bw ( KiB/s): min= 8192, max=16384, per=23.49%, avg=12288.00, stdev=5792.62, samples=2 00:11:09.385 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:11:09.385 lat (msec) : 10=7.92%, 20=46.11%, 50=40.58%, 100=5.39% 00:11:09.385 cpu : usr=2.98%, sys=5.37%, ctx=344, majf=0, minf=2 00:11:09.385 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:11:09.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.385 issued rwts: total=2812,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.385 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.385 job1: (groupid=0, jobs=1): err= 0: pid=155595: Mon Nov 18 20:12:20 2024 00:11:09.385 read: IOPS=2517, BW=9.83MiB/s (10.3MB/s)(10.0MiB/1017msec) 00:11:09.386 slat (usec): min=2, max=52551, avg=199.51, stdev=1925.44 00:11:09.386 clat (msec): min=7, max=148, avg=26.23, stdev=23.98 00:11:09.386 lat (msec): min=7, max=148, avg=26.43, stdev=24.18 00:11:09.386 clat percentiles (msec): 00:11:09.386 | 1.00th=[ 8], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 13], 00:11:09.386 | 30.00th=[ 
14], 40.00th=[ 15], 50.00th=[ 16], 60.00th=[ 17], 00:11:09.386 | 70.00th=[ 23], 80.00th=[ 37], 90.00th=[ 57], 95.00th=[ 85], 00:11:09.386 | 99.00th=[ 117], 99.50th=[ 117], 99.90th=[ 117], 99.95th=[ 117], 00:11:09.386 | 99.99th=[ 148] 00:11:09.386 write: IOPS=2672, BW=10.4MiB/s (10.9MB/s)(10.6MiB/1017msec); 0 zone resets 00:11:09.386 slat (usec): min=3, max=26367, avg=170.28, stdev=1231.54 00:11:09.386 clat (usec): min=4128, max=69969, avg=21770.13, stdev=13981.27 00:11:09.386 lat (usec): min=4144, max=69988, avg=21940.42, stdev=14086.17 00:11:09.386 clat percentiles (usec): 00:11:09.386 | 1.00th=[ 7308], 5.00th=[ 8717], 10.00th=[10028], 20.00th=[11207], 00:11:09.386 | 30.00th=[13042], 40.00th=[13698], 50.00th=[16712], 60.00th=[20317], 00:11:09.386 | 70.00th=[24511], 80.00th=[30278], 90.00th=[43779], 95.00th=[55313], 00:11:09.386 | 99.00th=[67634], 99.50th=[67634], 99.90th=[67634], 99.95th=[67634], 00:11:09.386 | 99.99th=[69731] 00:11:09.386 bw ( KiB/s): min= 8192, max=12536, per=19.81%, avg=10364.00, stdev=3071.67, samples=2 00:11:09.386 iops : min= 2048, max= 3134, avg=2591.00, stdev=767.92, samples=2 00:11:09.386 lat (msec) : 10=5.91%, 20=58.05%, 50=26.56%, 100=8.20%, 250=1.27% 00:11:09.386 cpu : usr=3.25%, sys=5.31%, ctx=175, majf=0, minf=1 00:11:09.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:09.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.386 issued rwts: total=2560,2718,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.386 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.386 job2: (groupid=0, jobs=1): err= 0: pid=155596: Mon Nov 18 20:12:20 2024 00:11:09.386 read: IOPS=2517, BW=9.83MiB/s (10.3MB/s)(10.0MiB/1017msec) 00:11:09.386 slat (usec): min=2, max=20085, avg=127.97, stdev=932.81 00:11:09.386 clat (usec): min=3408, max=58411, avg=18017.65, stdev=7862.34 00:11:09.386 lat (usec): 
min=3569, max=58419, avg=18145.62, stdev=7894.46 00:11:09.386 clat percentiles (usec): 00:11:09.386 | 1.00th=[ 3621], 5.00th=[ 7767], 10.00th=[11731], 20.00th=[12780], 00:11:09.386 | 30.00th=[13566], 40.00th=[13960], 50.00th=[15926], 60.00th=[16712], 00:11:09.386 | 70.00th=[19006], 80.00th=[23200], 90.00th=[29230], 95.00th=[36963], 00:11:09.386 | 99.00th=[38536], 99.50th=[38536], 99.90th=[41681], 99.95th=[45876], 00:11:09.386 | 99.99th=[58459] 00:11:09.386 write: IOPS=2853, BW=11.1MiB/s (11.7MB/s)(11.3MiB/1017msec); 0 zone resets 00:11:09.386 slat (usec): min=4, max=42044, avg=210.13, stdev=1350.29 00:11:09.386 clat (usec): min=959, max=106254, avg=25803.27, stdev=20523.04 00:11:09.386 lat (usec): min=970, max=106282, avg=26013.40, stdev=20658.51 00:11:09.386 clat percentiles (msec): 00:11:09.386 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:11:09.386 | 30.00th=[ 15], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 22], 00:11:09.386 | 70.00th=[ 30], 80.00th=[ 34], 90.00th=[ 53], 95.00th=[ 73], 00:11:09.386 | 99.00th=[ 103], 99.50th=[ 106], 99.90th=[ 107], 99.95th=[ 107], 00:11:09.386 | 99.99th=[ 107] 00:11:09.386 bw ( KiB/s): min= 8968, max=13232, per=21.22%, avg=11100.00, stdev=3015.10, samples=2 00:11:09.386 iops : min= 2242, max= 3308, avg=2775.00, stdev=753.78, samples=2 00:11:09.386 lat (usec) : 1000=0.09% 00:11:09.386 lat (msec) : 4=1.32%, 10=6.39%, 20=57.09%, 50=29.55%, 100=4.85% 00:11:09.386 lat (msec) : 250=0.71% 00:11:09.386 cpu : usr=3.84%, sys=4.92%, ctx=231, majf=0, minf=1 00:11:09.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:09.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.386 issued rwts: total=2560,2902,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.386 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.386 job3: (groupid=0, jobs=1): err= 0: pid=155597: Mon Nov 18 
20:12:20 2024 00:11:09.386 read: IOPS=4237, BW=16.6MiB/s (17.4MB/s)(16.6MiB/1004msec) 00:11:09.386 slat (usec): min=2, max=13469, avg=108.99, stdev=775.82 00:11:09.386 clat (usec): min=3063, max=33883, avg=13467.43, stdev=4554.93 00:11:09.386 lat (usec): min=3077, max=33900, avg=13576.42, stdev=4607.18 00:11:09.386 clat percentiles (usec): 00:11:09.386 | 1.00th=[ 5145], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10421], 00:11:09.386 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11863], 60.00th=[12387], 00:11:09.386 | 70.00th=[14353], 80.00th=[16712], 90.00th=[20055], 95.00th=[23987], 00:11:09.386 | 99.00th=[28705], 99.50th=[29754], 99.90th=[31589], 99.95th=[31851], 00:11:09.386 | 99.99th=[33817] 00:11:09.386 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:11:09.386 slat (usec): min=4, max=16965, avg=105.94, stdev=531.59 00:11:09.386 clat (usec): min=3461, max=46299, avg=14587.41, stdev=6985.98 00:11:09.386 lat (usec): min=3473, max=46324, avg=14693.35, stdev=7033.34 00:11:09.386 clat percentiles (usec): 00:11:09.386 | 1.00th=[ 4424], 5.00th=[ 6063], 10.00th=[ 7504], 20.00th=[10290], 00:11:09.386 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[13435], 00:11:09.386 | 70.00th=[15926], 80.00th=[20841], 90.00th=[22676], 95.00th=[27132], 00:11:09.386 | 99.00th=[42206], 99.50th=[43779], 99.90th=[46400], 99.95th=[46400], 00:11:09.386 | 99.99th=[46400] 00:11:09.386 bw ( KiB/s): min=16624, max=20240, per=35.24%, avg=18432.00, stdev=2556.90, samples=2 00:11:09.386 iops : min= 4156, max= 5060, avg=4608.00, stdev=639.22, samples=2 00:11:09.386 lat (msec) : 4=0.58%, 10=13.71%, 20=68.64%, 50=17.07% 00:11:09.386 cpu : usr=5.88%, sys=9.47%, ctx=527, majf=0, minf=1 00:11:09.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:09.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.386 issued 
rwts: total=4254,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.386 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.386 00:11:09.386 Run status group 0 (all jobs): 00:11:09.386 READ: bw=46.8MiB/s (49.1MB/s), 9.83MiB/s-16.6MiB/s (10.3MB/s-17.4MB/s), io=47.6MiB (49.9MB), run=1004-1017msec 00:11:09.386 WRITE: bw=51.1MiB/s (53.6MB/s), 10.4MiB/s-17.9MiB/s (10.9MB/s-18.8MB/s), io=52.0MiB (54.5MB), run=1004-1017msec 00:11:09.386 00:11:09.386 Disk stats (read/write): 00:11:09.386 nvme0n1: ios=2603/2839, merge=0/0, ticks=27189/27287, in_queue=54476, util=98.50% 00:11:09.386 nvme0n2: ios=2098/2343, merge=0/0, ticks=25812/25018, in_queue=50830, util=91.77% 00:11:09.386 nvme0n3: ios=2108/2527, merge=0/0, ticks=37673/55922, in_queue=93595, util=95.29% 00:11:09.386 nvme0n4: ios=3509/3584, merge=0/0, ticks=45288/49232, in_queue=94520, util=100.00% 00:11:09.386 20:12:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:09.386 [global] 00:11:09.386 thread=1 00:11:09.386 invalidate=1 00:11:09.386 rw=randwrite 00:11:09.386 time_based=1 00:11:09.386 runtime=1 00:11:09.386 ioengine=libaio 00:11:09.386 direct=1 00:11:09.386 bs=4096 00:11:09.386 iodepth=128 00:11:09.386 norandommap=0 00:11:09.386 numjobs=1 00:11:09.386 00:11:09.386 verify_dump=1 00:11:09.386 verify_backlog=512 00:11:09.386 verify_state_save=0 00:11:09.386 do_verify=1 00:11:09.386 verify=crc32c-intel 00:11:09.386 [job0] 00:11:09.386 filename=/dev/nvme0n1 00:11:09.386 [job1] 00:11:09.386 filename=/dev/nvme0n2 00:11:09.386 [job2] 00:11:09.386 filename=/dev/nvme0n3 00:11:09.386 [job3] 00:11:09.386 filename=/dev/nvme0n4 00:11:09.386 Could not set queue depth (nvme0n1) 00:11:09.386 Could not set queue depth (nvme0n2) 00:11:09.386 Could not set queue depth (nvme0n3) 00:11:09.386 Could not set queue depth (nvme0n4) 00:11:09.386 job0: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.386 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.386 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.386 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.386 fio-3.35 00:11:09.386 Starting 4 threads 00:11:10.772 00:11:10.773 job0: (groupid=0, jobs=1): err= 0: pid=155823: Mon Nov 18 20:12:22 2024 00:11:10.773 read: IOPS=4451, BW=17.4MiB/s (18.2MB/s)(17.5MiB/1004msec) 00:11:10.773 slat (usec): min=2, max=16200, avg=99.72, stdev=638.77 00:11:10.773 clat (usec): min=1390, max=56262, avg=13154.41, stdev=6625.03 00:11:10.773 lat (usec): min=1398, max=56298, avg=13254.13, stdev=6669.83 00:11:10.773 clat percentiles (usec): 00:11:10.773 | 1.00th=[ 3130], 5.00th=[ 7570], 10.00th=[ 9372], 20.00th=[10945], 00:11:10.773 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11863], 60.00th=[12125], 00:11:10.773 | 70.00th=[12387], 80.00th=[12911], 90.00th=[19006], 95.00th=[26346], 00:11:10.773 | 99.00th=[44303], 99.50th=[44303], 99.90th=[51643], 99.95th=[51643], 00:11:10.773 | 99.99th=[56361] 00:11:10.773 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:11:10.773 slat (usec): min=3, max=47489, avg=106.52, stdev=942.75 00:11:10.773 clat (usec): min=7402, max=58514, avg=14741.37, stdev=9119.32 00:11:10.773 lat (usec): min=7454, max=58524, avg=14847.89, stdev=9157.17 00:11:10.773 clat percentiles (usec): 00:11:10.773 | 1.00th=[ 7963], 5.00th=[ 8848], 10.00th=[10290], 20.00th=[11076], 00:11:10.773 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11994], 60.00th=[12256], 00:11:10.773 | 70.00th=[12911], 80.00th=[13829], 90.00th=[22152], 95.00th=[37487], 00:11:10.773 | 99.00th=[57934], 99.50th=[58459], 99.90th=[58459], 99.95th=[58459], 00:11:10.773 | 99.99th=[58459] 
00:11:10.773 bw ( KiB/s): min=16384, max=20480, per=29.39%, avg=18432.00, stdev=2896.31, samples=2 00:11:10.773 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:11:10.773 lat (msec) : 2=0.11%, 4=1.58%, 10=8.47%, 20=78.75%, 50=9.62% 00:11:10.773 lat (msec) : 100=1.48% 00:11:10.773 cpu : usr=3.39%, sys=6.88%, ctx=513, majf=0, minf=1 00:11:10.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:10.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.773 issued rwts: total=4469,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.773 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.773 job1: (groupid=0, jobs=1): err= 0: pid=155828: Mon Nov 18 20:12:22 2024 00:11:10.773 read: IOPS=4317, BW=16.9MiB/s (17.7MB/s)(17.0MiB/1007msec) 00:11:10.773 slat (usec): min=2, max=12144, avg=105.63, stdev=710.29 00:11:10.773 clat (usec): min=2403, max=51916, avg=12911.46, stdev=4750.08 00:11:10.773 lat (usec): min=4338, max=51933, avg=13017.09, stdev=4814.98 00:11:10.773 clat percentiles (usec): 00:11:10.773 | 1.00th=[ 5473], 5.00th=[ 8455], 10.00th=[ 9372], 20.00th=[10028], 00:11:10.773 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11076], 60.00th=[12125], 00:11:10.773 | 70.00th=[13698], 80.00th=[15664], 90.00th=[19268], 95.00th=[21365], 00:11:10.773 | 99.00th=[27919], 99.50th=[36439], 99.90th=[52167], 99.95th=[52167], 00:11:10.773 | 99.99th=[52167] 00:11:10.773 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:11:10.773 slat (usec): min=3, max=12705, avg=107.73, stdev=631.57 00:11:10.773 clat (usec): min=1159, max=71104, avg=15535.85, stdev=12595.35 00:11:10.773 lat (usec): min=1168, max=71532, avg=15643.59, stdev=12664.49 00:11:10.773 clat percentiles (usec): 00:11:10.773 | 1.00th=[ 3916], 5.00th=[ 6521], 10.00th=[ 7832], 20.00th=[ 9110], 00:11:10.773 | 30.00th=[10683], 
40.00th=[11469], 50.00th=[11731], 60.00th=[11863], 00:11:10.773 | 70.00th=[12518], 80.00th=[15270], 90.00th=[27132], 95.00th=[51119], 00:11:10.773 | 99.00th=[64226], 99.50th=[66323], 99.90th=[70779], 99.95th=[70779], 00:11:10.773 | 99.99th=[70779] 00:11:10.773 bw ( KiB/s): min=12688, max=24127, per=29.35%, avg=18407.50, stdev=8088.59, samples=2 00:11:10.773 iops : min= 3172, max= 6031, avg=4601.50, stdev=2021.62, samples=2 00:11:10.773 lat (msec) : 2=0.04%, 4=0.54%, 10=18.90%, 20=68.55%, 50=9.33% 00:11:10.773 lat (msec) : 100=2.64% 00:11:10.773 cpu : usr=6.06%, sys=8.95%, ctx=523, majf=0, minf=1 00:11:10.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:10.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.773 issued rwts: total=4348,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.773 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.773 job2: (groupid=0, jobs=1): err= 0: pid=155829: Mon Nov 18 20:12:22 2024 00:11:10.773 read: IOPS=3154, BW=12.3MiB/s (12.9MB/s)(12.9MiB/1045msec) 00:11:10.773 slat (usec): min=2, max=26003, avg=162.75, stdev=1350.64 00:11:10.773 clat (msec): min=5, max=106, avg=20.90, stdev=16.60 00:11:10.773 lat (msec): min=5, max=106, avg=21.06, stdev=16.72 00:11:10.773 clat percentiles (msec): 00:11:10.773 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:11:10.773 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 17], 00:11:10.773 | 70.00th=[ 20], 80.00th=[ 24], 90.00th=[ 48], 95.00th=[ 67], 00:11:10.773 | 99.00th=[ 83], 99.50th=[ 87], 99.90th=[ 88], 99.95th=[ 95], 00:11:10.773 | 99.99th=[ 107] 00:11:10.773 write: IOPS=3429, BW=13.4MiB/s (14.0MB/s)(14.0MiB/1045msec); 0 zone resets 00:11:10.773 slat (usec): min=3, max=29404, avg=102.76, stdev=969.46 00:11:10.773 clat (usec): min=410, max=73077, avg=17675.02, stdev=10243.39 00:11:10.773 lat (usec): min=479, 
max=73122, avg=17777.78, stdev=10301.01 00:11:10.773 clat percentiles (usec): 00:11:10.773 | 1.00th=[ 1074], 5.00th=[ 6128], 10.00th=[ 7111], 20.00th=[10290], 00:11:10.773 | 30.00th=[12518], 40.00th=[13304], 50.00th=[14746], 60.00th=[17695], 00:11:10.773 | 70.00th=[20317], 80.00th=[25560], 90.00th=[30278], 95.00th=[35914], 00:11:10.773 | 99.00th=[57410], 99.50th=[66847], 99.90th=[66847], 99.95th=[66847], 00:11:10.773 | 99.99th=[72877] 00:11:10.773 bw ( KiB/s): min=12632, max=16007, per=22.83%, avg=14319.50, stdev=2386.49, samples=2 00:11:10.773 iops : min= 3158, max= 4001, avg=3579.50, stdev=596.09, samples=2 00:11:10.773 lat (usec) : 500=0.04%, 750=0.04%, 1000=0.26% 00:11:10.773 lat (msec) : 2=0.44%, 4=1.03%, 10=11.53%, 20=56.66%, 50=24.56% 00:11:10.773 lat (msec) : 100=5.42%, 250=0.01% 00:11:10.773 cpu : usr=2.49%, sys=5.36%, ctx=215, majf=0, minf=1 00:11:10.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:10.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.773 issued rwts: total=3296,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.773 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.773 job3: (groupid=0, jobs=1): err= 0: pid=155830: Mon Nov 18 20:12:22 2024 00:11:10.773 read: IOPS=3237, BW=12.6MiB/s (13.3MB/s)(12.7MiB/1002msec) 00:11:10.773 slat (usec): min=2, max=12509, avg=109.57, stdev=726.90 00:11:10.773 clat (usec): min=589, max=76545, avg=15500.09, stdev=5726.43 00:11:10.773 lat (usec): min=4112, max=76550, avg=15609.66, stdev=5770.13 00:11:10.773 clat percentiles (usec): 00:11:10.773 | 1.00th=[ 4883], 5.00th=[ 9634], 10.00th=[10945], 20.00th=[11863], 00:11:10.773 | 30.00th=[12780], 40.00th=[13435], 50.00th=[14877], 60.00th=[15664], 00:11:10.773 | 70.00th=[16909], 80.00th=[19006], 90.00th=[20841], 95.00th=[23987], 00:11:10.773 | 99.00th=[28181], 99.50th=[29230], 99.90th=[72877], 
99.95th=[72877], 00:11:10.773 | 99.99th=[76022] 00:11:10.773 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:11:10.773 slat (usec): min=3, max=21805, avg=149.72, stdev=1001.64 00:11:10.773 clat (usec): min=5723, max=74748, avg=21367.99, stdev=12759.46 00:11:10.773 lat (usec): min=5735, max=74766, avg=21517.71, stdev=12856.25 00:11:10.773 clat percentiles (usec): 00:11:10.773 | 1.00th=[ 6915], 5.00th=[11076], 10.00th=[11731], 20.00th=[12911], 00:11:10.773 | 30.00th=[13304], 40.00th=[14091], 50.00th=[16188], 60.00th=[18482], 00:11:10.773 | 70.00th=[23725], 80.00th=[28181], 90.00th=[39060], 95.00th=[52691], 00:11:10.773 | 99.00th=[66323], 99.50th=[67634], 99.90th=[74974], 99.95th=[74974], 00:11:10.773 | 99.99th=[74974] 00:11:10.773 bw ( KiB/s): min=11329, max=17320, per=22.84%, avg=14324.50, stdev=4236.28, samples=2 00:11:10.773 iops : min= 2832, max= 4330, avg=3581.00, stdev=1059.25, samples=2 00:11:10.773 lat (usec) : 750=0.01% 00:11:10.773 lat (msec) : 10=5.02%, 20=67.56%, 50=24.38%, 100=3.02% 00:11:10.773 cpu : usr=4.80%, sys=6.79%, ctx=296, majf=0, minf=1 00:11:10.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:10.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.773 issued rwts: total=3244,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.773 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.773 00:11:10.773 Run status group 0 (all jobs): 00:11:10.773 READ: bw=57.4MiB/s (60.2MB/s), 12.3MiB/s-17.4MiB/s (12.9MB/s-18.2MB/s), io=60.0MiB (62.9MB), run=1002-1045msec 00:11:10.773 WRITE: bw=61.2MiB/s (64.2MB/s), 13.4MiB/s-17.9MiB/s (14.0MB/s-18.8MB/s), io=64.0MiB (67.1MB), run=1002-1045msec 00:11:10.773 00:11:10.773 Disk stats (read/write): 00:11:10.773 nvme0n1: ios=3741/4096, merge=0/0, ticks=17756/20740, in_queue=38496, util=95.49% 00:11:10.773 nvme0n2: ios=3751/4096, 
merge=0/0, ticks=44635/56573, in_queue=101208, util=88.81% 00:11:10.773 nvme0n3: ios=2612/2871, merge=0/0, ticks=33837/35293, in_queue=69130, util=98.30% 00:11:10.773 nvme0n4: ios=2579/2699, merge=0/0, ticks=28790/51444, in_queue=80234, util=98.06% 00:11:10.773 20:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:10.773 20:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=155963 00:11:10.773 20:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:10.773 20:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:10.773 [global] 00:11:10.773 thread=1 00:11:10.773 invalidate=1 00:11:10.773 rw=read 00:11:10.773 time_based=1 00:11:10.773 runtime=10 00:11:10.773 ioengine=libaio 00:11:10.773 direct=1 00:11:10.773 bs=4096 00:11:10.773 iodepth=1 00:11:10.773 norandommap=1 00:11:10.773 numjobs=1 00:11:10.773 00:11:10.773 [job0] 00:11:10.773 filename=/dev/nvme0n1 00:11:10.773 [job1] 00:11:10.773 filename=/dev/nvme0n2 00:11:10.773 [job2] 00:11:10.774 filename=/dev/nvme0n3 00:11:10.774 [job3] 00:11:10.774 filename=/dev/nvme0n4 00:11:10.774 Could not set queue depth (nvme0n1) 00:11:10.774 Could not set queue depth (nvme0n2) 00:11:10.774 Could not set queue depth (nvme0n3) 00:11:10.774 Could not set queue depth (nvme0n4) 00:11:10.774 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.774 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.774 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.774 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.774 fio-3.35 00:11:10.774 Starting 4 threads 00:11:14.072 20:12:25 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:14.072 20:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:14.072 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=41902080, buflen=4096 00:11:14.072 fio: pid=156069, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:14.330 20:12:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.330 20:12:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:14.331 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=14929920, buflen=4096 00:11:14.331 fio: pid=156068, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:14.590 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=4231168, buflen=4096 00:11:14.590 fio: pid=156065, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:14.590 20:12:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.590 20:12:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:14.850 20:12:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.850 20:12:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 
00:11:14.850 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=52797440, buflen=4096 00:11:14.850 fio: pid=156066, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:14.850 00:11:14.850 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=156065: Mon Nov 18 20:12:26 2024 00:11:14.850 read: IOPS=295, BW=1182KiB/s (1210kB/s)(4132KiB/3497msec) 00:11:14.850 slat (usec): min=4, max=13938, avg=30.84, stdev=549.48 00:11:14.850 clat (usec): min=174, max=47952, avg=3343.46, stdev=10864.57 00:11:14.850 lat (usec): min=178, max=54991, avg=3374.32, stdev=10936.86 00:11:14.850 clat percentiles (usec): 00:11:14.850 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:11:14.850 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 215], 00:11:14.850 | 70.00th=[ 221], 80.00th=[ 237], 90.00th=[ 285], 95.00th=[41157], 00:11:14.850 | 99.00th=[41157], 99.50th=[42206], 99.90th=[44827], 99.95th=[47973], 00:11:14.850 | 99.99th=[47973] 00:11:14.850 bw ( KiB/s): min= 96, max= 904, per=0.98%, avg=285.33, stdev=327.19, samples=6 00:11:14.850 iops : min= 24, max= 226, avg=71.33, stdev=81.80, samples=6 00:11:14.850 lat (usec) : 250=82.88%, 500=9.28% 00:11:14.850 lat (msec) : 20=0.10%, 50=7.64% 00:11:14.850 cpu : usr=0.11%, sys=0.20%, ctx=1037, majf=0, minf=2 00:11:14.850 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.850 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.850 issued rwts: total=1034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.850 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.850 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=156066: Mon Nov 18 20:12:26 2024 00:11:14.850 read: IOPS=3388, BW=13.2MiB/s 
(13.9MB/s)(50.4MiB/3804msec) 00:11:14.850 slat (usec): min=4, max=12881, avg=14.05, stdev=185.42 00:11:14.850 clat (usec): min=164, max=42125, avg=276.75, stdev=1349.56 00:11:14.850 lat (usec): min=169, max=55006, avg=290.79, stdev=1415.00 00:11:14.850 clat percentiles (usec): 00:11:14.850 | 1.00th=[ 174], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 196], 00:11:14.850 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 221], 00:11:14.850 | 70.00th=[ 237], 80.00th=[ 253], 90.00th=[ 330], 95.00th=[ 363], 00:11:14.850 | 99.00th=[ 388], 99.50th=[ 400], 99.90th=[41157], 99.95th=[41157], 00:11:14.850 | 99.99th=[42206] 00:11:14.850 bw ( KiB/s): min= 8233, max=19232, per=49.57%, avg=14490.43, stdev=3704.74, samples=7 00:11:14.850 iops : min= 2058, max= 4808, avg=3622.57, stdev=926.26, samples=7 00:11:14.850 lat (usec) : 250=78.71%, 500=21.07%, 750=0.09%, 1000=0.02% 00:11:14.850 lat (msec) : 2=0.01%, 50=0.11% 00:11:14.850 cpu : usr=1.71%, sys=4.18%, ctx=12898, majf=0, minf=2 00:11:14.850 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.850 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.850 issued rwts: total=12891,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.850 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.850 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=156068: Mon Nov 18 20:12:26 2024 00:11:14.850 read: IOPS=1120, BW=4479KiB/s (4587kB/s)(14.2MiB/3255msec) 00:11:14.850 slat (nsec): min=5767, max=50144, avg=12923.51, stdev=5718.11 00:11:14.850 clat (usec): min=197, max=42121, avg=869.90, stdev=4791.16 00:11:14.850 lat (usec): min=206, max=42137, avg=882.82, stdev=4791.93 00:11:14.850 clat percentiles (usec): 00:11:14.850 | 1.00th=[ 217], 5.00th=[ 245], 10.00th=[ 265], 20.00th=[ 281], 00:11:14.850 | 30.00th=[ 289], 40.00th=[ 297], 
50.00th=[ 306], 60.00th=[ 310], 00:11:14.850 | 70.00th=[ 314], 80.00th=[ 322], 90.00th=[ 330], 95.00th=[ 343], 00:11:14.850 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:11:14.850 | 99.99th=[42206] 00:11:14.850 bw ( KiB/s): min= 96, max=12520, per=16.60%, avg=4852.00, stdev=4786.24, samples=6 00:11:14.850 iops : min= 24, max= 3130, avg=1213.00, stdev=1196.56, samples=6 00:11:14.850 lat (usec) : 250=6.14%, 500=92.35%, 750=0.08% 00:11:14.850 lat (msec) : 50=1.40% 00:11:14.850 cpu : usr=0.95%, sys=2.30%, ctx=3647, majf=0, minf=1 00:11:14.850 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.850 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.850 issued rwts: total=3646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.850 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.850 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=156069: Mon Nov 18 20:12:26 2024 00:11:14.850 read: IOPS=3488, BW=13.6MiB/s (14.3MB/s)(40.0MiB/2933msec) 00:11:14.850 slat (nsec): min=5381, max=63791, avg=12040.08, stdev=5794.88 00:11:14.850 clat (usec): min=183, max=3100, avg=269.11, stdev=60.31 00:11:14.850 lat (usec): min=188, max=3111, avg=281.15, stdev=62.99 00:11:14.850 clat percentiles (usec): 00:11:14.850 | 1.00th=[ 198], 5.00th=[ 208], 10.00th=[ 221], 20.00th=[ 237], 00:11:14.850 | 30.00th=[ 245], 40.00th=[ 253], 50.00th=[ 265], 60.00th=[ 273], 00:11:14.850 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 326], 00:11:14.850 | 99.00th=[ 537], 99.50th=[ 553], 99.90th=[ 578], 99.95th=[ 586], 00:11:14.850 | 99.99th=[ 635] 00:11:14.850 bw ( KiB/s): min=13256, max=14272, per=47.76%, avg=13961.60, stdev=416.90, samples=5 00:11:14.850 iops : min= 3314, max= 3568, avg=3490.40, stdev=104.22, samples=5 00:11:14.850 lat (usec) : 250=36.05%, 
500=62.23%, 750=1.70% 00:11:14.850 lat (msec) : 4=0.01% 00:11:14.850 cpu : usr=2.83%, sys=6.72%, ctx=10231, majf=0, minf=1 00:11:14.850 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.850 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.850 issued rwts: total=10231,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.850 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.850 00:11:14.850 Run status group 0 (all jobs): 00:11:14.850 READ: bw=28.5MiB/s (29.9MB/s), 1182KiB/s-13.6MiB/s (1210kB/s-14.3MB/s), io=109MiB (114MB), run=2933-3804msec 00:11:14.850 00:11:14.850 Disk stats (read/write): 00:11:14.850 nvme0n1: ios=406/0, merge=0/0, ticks=3322/0, in_queue=3322, util=95.34% 00:11:14.850 nvme0n2: ios=12927/0, merge=0/0, ticks=3804/0, in_queue=3804, util=98.61% 00:11:14.850 nvme0n3: ios=3641/0, merge=0/0, ticks=2986/0, in_queue=2986, util=96.79% 00:11:14.850 nvme0n4: ios=9979/0, merge=0/0, ticks=2539/0, in_queue=2539, util=96.71% 00:11:15.109 20:12:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:15.109 20:12:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:15.369 20:12:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:15.369 20:12:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:15.627 20:12:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:15.627 20:12:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:15.886 20:12:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:15.886 20:12:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:16.146 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:16.146 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 155963 00:11:16.146 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:16.146 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:16.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.405 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:16.405 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:16.405 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:16.405 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.405 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:16.405 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.405 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:16.405 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 
']' 00:11:16.405 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:16.405 nvmf hotplug test: fio failed as expected 00:11:16.405 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.667 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:16.667 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:16.667 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:16.667 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:16.667 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:16.667 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:16.667 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:16.667 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:16.667 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:16.667 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:16.667 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:16.667 rmmod nvme_tcp 00:11:16.667 rmmod nvme_fabrics 00:11:16.667 rmmod nvme_keyring 00:11:16.667 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:16.667 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:16.667 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@129 -- # return 0 00:11:16.667 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 153929 ']' 00:11:16.667 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 153929 00:11:16.667 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 153929 ']' 00:11:16.667 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 153929 00:11:16.667 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:16.667 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.667 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 153929 00:11:16.667 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:16.667 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:16.667 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 153929' 00:11:16.667 killing process with pid 153929 00:11:16.667 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 153929 00:11:16.667 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 153929 00:11:16.928 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:16.928 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:16.928 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:16.928 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:16.928 20:12:28 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:16.928 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:16.928 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:16.928 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:16.928 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:16.928 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.928 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.928 20:12:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.477 20:12:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:19.477 00:11:19.477 real 0m24.179s 00:11:19.477 user 1m24.783s 00:11:19.477 sys 0m7.357s 00:11:19.477 20:12:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.477 20:12:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.477 ************************************ 00:11:19.477 END TEST nvmf_fio_target 00:11:19.477 ************************************ 00:11:19.477 20:12:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:19.477 20:12:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:19.477 20:12:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.477 20:12:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 
-- # set +x 00:11:19.477 ************************************ 00:11:19.477 START TEST nvmf_bdevio 00:11:19.477 ************************************ 00:11:19.477 20:12:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:19.477 * Looking for test storage... 00:11:19.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.477 20:12:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:19.477 20:12:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:19.477 20:12:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:19.477 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:19.477 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.477 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.477 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.477 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.477 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.477 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.477 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.477 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.477 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.477 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.477 20:12:31 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.477 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:19.477 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:19.477 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.477 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:19.477 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:19.477 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:19.477 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.477 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:19.477 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.477 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:19.477 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:19.477 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.477 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:19.477 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.477 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:19.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.478 --rc genhtml_branch_coverage=1 00:11:19.478 --rc genhtml_function_coverage=1 00:11:19.478 --rc genhtml_legend=1 00:11:19.478 --rc geninfo_all_blocks=1 00:11:19.478 --rc geninfo_unexecuted_blocks=1 00:11:19.478 00:11:19.478 ' 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:19.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.478 --rc genhtml_branch_coverage=1 00:11:19.478 --rc genhtml_function_coverage=1 00:11:19.478 --rc genhtml_legend=1 00:11:19.478 --rc geninfo_all_blocks=1 00:11:19.478 --rc geninfo_unexecuted_blocks=1 00:11:19.478 00:11:19.478 ' 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:19.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.478 --rc genhtml_branch_coverage=1 00:11:19.478 --rc genhtml_function_coverage=1 00:11:19.478 --rc genhtml_legend=1 00:11:19.478 --rc geninfo_all_blocks=1 00:11:19.478 --rc geninfo_unexecuted_blocks=1 00:11:19.478 00:11:19.478 ' 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:19.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.478 --rc genhtml_branch_coverage=1 00:11:19.478 --rc genhtml_function_coverage=1 00:11:19.478 --rc genhtml_legend=1 00:11:19.478 --rc geninfo_all_blocks=1 00:11:19.478 --rc geninfo_unexecuted_blocks=1 00:11:19.478 00:11:19.478 ' 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # 
uname -s 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:19.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:19.478 20:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.388 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:21.388 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:21.388 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:21.388 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:21.388 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:21.388 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:21.388 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:21.388 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:21.388 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:21.388 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:21.388 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:21.388 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:21.388 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:21.388 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:21.388 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:21.388 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:21.388 20:12:33 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:21.388 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:21.388 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:21.388 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:21.388 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:21.388 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:21.388 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:21.388 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:21.388 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:21.388 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:21.388 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:21.388 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:21.388 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:21.388 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:21.389 20:12:33 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:21.389 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:21.389 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:21.389 
20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:21.389 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:21.389 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:21.389 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:21.648 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:21.648 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:21.648 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:21.648 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:21.649 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:21.649 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:21.649 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:21.649 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:21.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:21.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:11:21.649 00:11:21.649 --- 10.0.0.2 ping statistics --- 00:11:21.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.649 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:11:21.649 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:21.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:21.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:11:21.649 00:11:21.649 --- 10.0.0.1 ping statistics --- 00:11:21.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.649 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:11:21.649 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.649 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:21.649 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:21.649 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.649 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:21.649 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:21.649 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.649 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:21.649 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:21.649 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:21.649 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:21.649 20:12:33 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:21.649 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.649 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=158815 00:11:21.649 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:21.649 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 158815 00:11:21.649 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 158815 ']' 00:11:21.649 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.649 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.649 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.649 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.649 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.649 [2024-11-18 20:12:33.612409] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:11:21.649 [2024-11-18 20:12:33.612512] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.908 [2024-11-18 20:12:33.684033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:21.908 [2024-11-18 20:12:33.727715] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.908 [2024-11-18 20:12:33.727772] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.908 [2024-11-18 20:12:33.727795] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.908 [2024-11-18 20:12:33.727805] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.908 [2024-11-18 20:12:33.727815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:21.908 [2024-11-18 20:12:33.729340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:21.908 [2024-11-18 20:12:33.729403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:21.908 [2024-11-18 20:12:33.729467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:21.908 [2024-11-18 20:12:33.729470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:21.908 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.908 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:21.908 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:21.908 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:21.908 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.908 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.908 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:21.908 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.908 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.908 [2024-11-18 20:12:33.873201] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:21.908 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.908 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:21.908 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.908 20:12:33 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:22.167 Malloc0 00:11:22.167 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.167 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:22.167 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.167 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:22.167 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.167 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:22.168 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.168 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:22.168 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.168 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:22.168 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.168 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:22.168 [2024-11-18 20:12:33.943384] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.168 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.168 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:11:22.168 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:22.168 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:22.168 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:22.168 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:22.168 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:22.168 { 00:11:22.168 "params": { 00:11:22.168 "name": "Nvme$subsystem", 00:11:22.168 "trtype": "$TEST_TRANSPORT", 00:11:22.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:22.168 "adrfam": "ipv4", 00:11:22.168 "trsvcid": "$NVMF_PORT", 00:11:22.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:22.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:22.168 "hdgst": ${hdgst:-false}, 00:11:22.168 "ddgst": ${ddgst:-false} 00:11:22.168 }, 00:11:22.168 "method": "bdev_nvme_attach_controller" 00:11:22.168 } 00:11:22.168 EOF 00:11:22.168 )") 00:11:22.168 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:22.168 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:11:22.168 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:22.168 20:12:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:22.168 "params": { 00:11:22.168 "name": "Nvme1", 00:11:22.168 "trtype": "tcp", 00:11:22.168 "traddr": "10.0.0.2", 00:11:22.168 "adrfam": "ipv4", 00:11:22.168 "trsvcid": "4420", 00:11:22.168 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:22.168 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:22.168 "hdgst": false, 00:11:22.168 "ddgst": false 00:11:22.168 }, 00:11:22.168 "method": "bdev_nvme_attach_controller" 00:11:22.168 }' 00:11:22.168 [2024-11-18 20:12:33.990615] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:11:22.168 [2024-11-18 20:12:33.990728] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158848 ] 00:11:22.168 [2024-11-18 20:12:34.061304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:22.168 [2024-11-18 20:12:34.111816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.168 [2024-11-18 20:12:34.111870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:22.168 [2024-11-18 20:12:34.111873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.428 I/O targets: 00:11:22.428 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:22.428 00:11:22.428 00:11:22.428 CUnit - A unit testing framework for C - Version 2.1-3 00:11:22.428 http://cunit.sourceforge.net/ 00:11:22.428 00:11:22.428 00:11:22.428 Suite: bdevio tests on: Nvme1n1 00:11:22.428 Test: blockdev write read block ...passed 00:11:22.428 Test: blockdev write zeroes read block ...passed 00:11:22.428 Test: blockdev write zeroes read no split ...passed 00:11:22.428 Test: blockdev write zeroes read split 
...passed 00:11:22.428 Test: blockdev write zeroes read split partial ...passed 00:11:22.428 Test: blockdev reset ...[2024-11-18 20:12:34.398223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:22.428 [2024-11-18 20:12:34.398333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1913b70 (9): Bad file descriptor 00:11:22.690 [2024-11-18 20:12:34.495116] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:11:22.690 passed 00:11:22.690 Test: blockdev write read 8 blocks ...passed 00:11:22.690 Test: blockdev write read size > 128k ...passed 00:11:22.690 Test: blockdev write read invalid size ...passed 00:11:22.690 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.690 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:22.690 Test: blockdev write read max offset ...passed 00:11:22.690 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.953 Test: blockdev writev readv 8 blocks ...passed 00:11:22.953 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.953 Test: blockdev writev readv block ...passed 00:11:22.953 Test: blockdev writev readv size > 128k ...passed 00:11:22.953 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.953 Test: blockdev comparev and writev ...[2024-11-18 20:12:34.749984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.953 [2024-11-18 20:12:34.750023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:22.953 [2024-11-18 20:12:34.750048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.953 [2024-11-18 
20:12:34.750066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:22.953 [2024-11-18 20:12:34.750411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.953 [2024-11-18 20:12:34.750444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:22.953 [2024-11-18 20:12:34.750467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.953 [2024-11-18 20:12:34.750484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:22.953 [2024-11-18 20:12:34.750824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.953 [2024-11-18 20:12:34.750848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:22.953 [2024-11-18 20:12:34.750870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.953 [2024-11-18 20:12:34.750888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:22.953 [2024-11-18 20:12:34.751244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.953 [2024-11-18 20:12:34.751270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:22.953 [2024-11-18 20:12:34.751292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.953 [2024-11-18 20:12:34.751310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:22.953 passed 00:11:22.953 Test: blockdev nvme passthru rw ...passed 00:11:22.953 Test: blockdev nvme passthru vendor specific ...[2024-11-18 20:12:34.832922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:22.953 [2024-11-18 20:12:34.832951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:22.953 [2024-11-18 20:12:34.833086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:22.953 [2024-11-18 20:12:34.833109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:22.953 [2024-11-18 20:12:34.833243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:22.953 [2024-11-18 20:12:34.833267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:22.953 [2024-11-18 20:12:34.833404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:22.953 [2024-11-18 20:12:34.833426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:22.953 passed 00:11:22.953 Test: blockdev nvme admin passthru ...passed 00:11:22.953 Test: blockdev copy ...passed 00:11:22.953 00:11:22.953 Run Summary: Type Total Ran Passed Failed Inactive 00:11:22.953 suites 1 1 n/a 0 0 00:11:22.953 tests 23 23 23 0 0 00:11:22.953 asserts 152 152 152 0 n/a 00:11:22.953 00:11:22.953 Elapsed time = 1.208 seconds 
00:11:23.213 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:23.213 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.213 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.213 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.213 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:23.213 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:23.213 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:23.213 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:23.213 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:23.213 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:23.213 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:23.213 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:23.213 rmmod nvme_tcp 00:11:23.213 rmmod nvme_fabrics 00:11:23.213 rmmod nvme_keyring 00:11:23.213 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:23.213 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:23.213 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:23.213 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 158815 ']' 00:11:23.213 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 158815 00:11:23.213 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 158815 ']' 00:11:23.213 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 158815 00:11:23.213 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:23.213 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:23.213 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 158815 00:11:23.213 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:23.213 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:23.213 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 158815' 00:11:23.213 killing process with pid 158815 00:11:23.213 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 158815 00:11:23.213 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 158815 00:11:23.474 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:23.474 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:23.474 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:23.475 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:23.475 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:23.475 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:23.475 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:23.475 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:11:23.475 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:23.475 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.475 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.475 20:12:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:26.028 00:11:26.028 real 0m6.487s 00:11:26.028 user 0m9.524s 00:11:26.028 sys 0m2.218s 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.028 ************************************ 00:11:26.028 END TEST nvmf_bdevio 00:11:26.028 ************************************ 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:26.028 00:11:26.028 real 3m55.322s 00:11:26.028 user 10m12.189s 00:11:26.028 sys 1m7.934s 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:26.028 ************************************ 00:11:26.028 END TEST nvmf_target_core 00:11:26.028 ************************************ 00:11:26.028 20:12:37 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:26.028 20:12:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:26.028 20:12:37 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.028 20:12:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:11:26.028 ************************************ 00:11:26.028 START TEST nvmf_target_extra 00:11:26.028 ************************************ 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:26.028 * Looking for test storage... 00:11:26.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:26.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.028 --rc genhtml_branch_coverage=1 00:11:26.028 --rc genhtml_function_coverage=1 00:11:26.028 --rc genhtml_legend=1 00:11:26.028 --rc geninfo_all_blocks=1 
00:11:26.028 --rc geninfo_unexecuted_blocks=1 00:11:26.028 00:11:26.028 ' 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:26.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.028 --rc genhtml_branch_coverage=1 00:11:26.028 --rc genhtml_function_coverage=1 00:11:26.028 --rc genhtml_legend=1 00:11:26.028 --rc geninfo_all_blocks=1 00:11:26.028 --rc geninfo_unexecuted_blocks=1 00:11:26.028 00:11:26.028 ' 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:26.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.028 --rc genhtml_branch_coverage=1 00:11:26.028 --rc genhtml_function_coverage=1 00:11:26.028 --rc genhtml_legend=1 00:11:26.028 --rc geninfo_all_blocks=1 00:11:26.028 --rc geninfo_unexecuted_blocks=1 00:11:26.028 00:11:26.028 ' 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:26.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.028 --rc genhtml_branch_coverage=1 00:11:26.028 --rc genhtml_function_coverage=1 00:11:26.028 --rc genhtml_legend=1 00:11:26.028 --rc geninfo_all_blocks=1 00:11:26.028 --rc geninfo_unexecuted_blocks=1 00:11:26.028 00:11:26.028 ' 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.028 20:12:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:26.029 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:26.029 ************************************ 00:11:26.029 START TEST nvmf_example 00:11:26.029 ************************************ 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:26.029 * Looking for test storage... 00:11:26.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:26.029 
20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:26.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.029 --rc genhtml_branch_coverage=1 00:11:26.029 --rc genhtml_function_coverage=1 00:11:26.029 --rc genhtml_legend=1 00:11:26.029 --rc geninfo_all_blocks=1 00:11:26.029 --rc geninfo_unexecuted_blocks=1 00:11:26.029 00:11:26.029 ' 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:26.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.029 --rc genhtml_branch_coverage=1 00:11:26.029 --rc genhtml_function_coverage=1 00:11:26.029 --rc genhtml_legend=1 00:11:26.029 --rc geninfo_all_blocks=1 00:11:26.029 --rc geninfo_unexecuted_blocks=1 00:11:26.029 00:11:26.029 ' 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:26.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.029 --rc genhtml_branch_coverage=1 00:11:26.029 --rc genhtml_function_coverage=1 00:11:26.029 --rc genhtml_legend=1 00:11:26.029 --rc geninfo_all_blocks=1 00:11:26.029 --rc geninfo_unexecuted_blocks=1 00:11:26.029 00:11:26.029 ' 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:26.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.029 --rc 
genhtml_branch_coverage=1 00:11:26.029 --rc genhtml_function_coverage=1 00:11:26.029 --rc genhtml_legend=1 00:11:26.029 --rc geninfo_all_blocks=1 00:11:26.029 --rc geninfo_unexecuted_blocks=1 00:11:26.029 00:11:26.029 ' 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.029 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:26.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:26.030 20:12:37 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.030 
20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:26.030 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:28.576 20:12:39 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:28.576 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:28.576 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.576 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:28.577 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:28.577 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.577 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:28.577 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.577 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:28.577 20:12:39 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.577 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:28.577 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:28.577 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.577 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:28.577 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:28.577 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.577 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:28.577 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:28.577 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:28.577 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:28.577 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:28.577 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:28.577 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:28.577 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.577 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:28.577 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:28.577 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:28.577 
20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:28.577 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:28.577 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:28.577 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:28.577 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.577 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:28.577 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:28.577 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:28.577 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:28.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:28.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:11:28.577 00:11:28.577 --- 10.0.0.2 ping statistics --- 00:11:28.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.577 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:28.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:28.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:11:28.577 00:11:28.577 --- 10.0.0.1 ping statistics --- 00:11:28.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.577 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:28.577 20:12:40 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=161102 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 161102 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 161102 ']' 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:28.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:28.577 20:12:40 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.577 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:28.578 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:28.578 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.578 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.578 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.578 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.578 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.578 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.578 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.578 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:28.578 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:40.828 Initializing NVMe Controllers 00:11:40.828 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:40.828 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:40.828 Initialization complete. Launching workers. 00:11:40.828 ======================================================== 00:11:40.828 Latency(us) 00:11:40.828 Device Information : IOPS MiB/s Average min max 00:11:40.828 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14649.77 57.23 4368.26 586.46 15614.80 00:11:40.828 ======================================================== 00:11:40.828 Total : 14649.77 57.23 4368.26 586.46 15614.80 00:11:40.828 00:11:40.828 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:40.828 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:40.828 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:40.828 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:40.828 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:40.828 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:40.828 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:40.828 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:40.828 rmmod nvme_tcp 00:11:40.828 rmmod nvme_fabrics 00:11:40.828 rmmod nvme_keyring 00:11:40.828 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:40.828 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:11:40.828 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:40.828 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 161102 ']' 00:11:40.828 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 161102 00:11:40.828 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 161102 ']' 00:11:40.828 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 161102 00:11:40.828 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:40.828 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:40.828 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 161102 00:11:40.828 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:40.828 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:40.828 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 161102' 00:11:40.828 killing process with pid 161102 00:11:40.828 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 161102 00:11:40.828 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 161102 00:11:40.828 nvmf threads initialize successfully 00:11:40.828 bdev subsystem init successfully 00:11:40.828 created a nvmf target service 00:11:40.828 create targets's poll groups done 00:11:40.828 all subsystems of target started 00:11:40.828 nvmf target is running 00:11:40.828 all subsystems of target stopped 00:11:40.828 destroy targets's poll groups done 00:11:40.828 destroyed the nvmf target service 00:11:40.828 bdev subsystem finish 
successfully 00:11:40.828 nvmf threads destroy successfully 00:11:40.828 20:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:40.828 20:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:40.828 20:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:40.828 20:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:40.828 20:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:40.828 20:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:40.828 20:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:40.828 20:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:40.828 20:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:40.828 20:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.828 20:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.828 20:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.090 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:41.090 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:41.090 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:41.090 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:41.352 00:11:41.352 real 0m15.426s 00:11:41.352 user 0m42.232s 00:11:41.352 sys 0m3.411s 00:11:41.352 20:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:41.352 ************************************ 00:11:41.352 END TEST nvmf_example 00:11:41.352 ************************************ 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:41.352 ************************************ 00:11:41.352 START TEST nvmf_filesystem 00:11:41.352 ************************************ 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:41.352 * Looking for test storage... 
00:11:41.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:41.352 
20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:41.352 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:41.352 --rc genhtml_branch_coverage=1 00:11:41.352 --rc genhtml_function_coverage=1 00:11:41.352 --rc genhtml_legend=1 00:11:41.352 --rc geninfo_all_blocks=1 00:11:41.352 --rc geninfo_unexecuted_blocks=1 00:11:41.352 00:11:41.352 ' 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:41.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.352 --rc genhtml_branch_coverage=1 00:11:41.352 --rc genhtml_function_coverage=1 00:11:41.352 --rc genhtml_legend=1 00:11:41.352 --rc geninfo_all_blocks=1 00:11:41.352 --rc geninfo_unexecuted_blocks=1 00:11:41.352 00:11:41.352 ' 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:41.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.352 --rc genhtml_branch_coverage=1 00:11:41.352 --rc genhtml_function_coverage=1 00:11:41.352 --rc genhtml_legend=1 00:11:41.352 --rc geninfo_all_blocks=1 00:11:41.352 --rc geninfo_unexecuted_blocks=1 00:11:41.352 00:11:41.352 ' 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:41.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.352 --rc genhtml_branch_coverage=1 00:11:41.352 --rc genhtml_function_coverage=1 00:11:41.352 --rc genhtml_legend=1 00:11:41.352 --rc geninfo_all_blocks=1 00:11:41.352 --rc geninfo_unexecuted_blocks=1 00:11:41.352 00:11:41.352 ' 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:41.352 20:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:41.352 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:41.353 20:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:41.353 20:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 
-- # CONFIG_ARCH=native 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:41.353 
20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:41.353 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:41.354 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:41.354 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:41.354 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:41.354 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:41.354 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:41.354 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:41.354 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:41.354 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:41.354 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:41.354 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:41.354 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:41.354 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:41.354 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:41.354 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:41.354 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:41.354 20:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:41.354 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:41.354 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:41.354 #define SPDK_CONFIG_H 00:11:41.354 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:41.354 #define SPDK_CONFIG_APPS 1 00:11:41.354 #define SPDK_CONFIG_ARCH native 00:11:41.354 #undef SPDK_CONFIG_ASAN 00:11:41.354 #undef SPDK_CONFIG_AVAHI 00:11:41.354 #undef SPDK_CONFIG_CET 00:11:41.354 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:41.354 #define SPDK_CONFIG_COVERAGE 1 00:11:41.354 #define SPDK_CONFIG_CROSS_PREFIX 00:11:41.354 #undef SPDK_CONFIG_CRYPTO 00:11:41.354 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:41.354 #undef SPDK_CONFIG_CUSTOMOCF 00:11:41.354 #undef SPDK_CONFIG_DAOS 00:11:41.354 #define SPDK_CONFIG_DAOS_DIR 00:11:41.354 #define SPDK_CONFIG_DEBUG 1 00:11:41.354 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:41.354 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:41.354 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:41.354 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:41.354 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:41.354 #undef SPDK_CONFIG_DPDK_UADK 00:11:41.354 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:41.354 #define SPDK_CONFIG_EXAMPLES 1 00:11:41.354 #undef SPDK_CONFIG_FC 00:11:41.354 #define SPDK_CONFIG_FC_PATH 00:11:41.354 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:41.354 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:41.354 #define SPDK_CONFIG_FSDEV 1 00:11:41.354 #undef SPDK_CONFIG_FUSE 00:11:41.354 #undef SPDK_CONFIG_FUZZER 00:11:41.354 #define 
SPDK_CONFIG_FUZZER_LIB 00:11:41.354 #undef SPDK_CONFIG_GOLANG 00:11:41.354 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:41.354 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:41.354 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:41.354 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:41.354 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:41.354 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:41.354 #undef SPDK_CONFIG_HAVE_LZ4 00:11:41.354 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:41.354 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:41.354 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:41.354 #define SPDK_CONFIG_IDXD 1 00:11:41.354 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:41.354 #undef SPDK_CONFIG_IPSEC_MB 00:11:41.354 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:41.354 #define SPDK_CONFIG_ISAL 1 00:11:41.354 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:41.354 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:41.354 #define SPDK_CONFIG_LIBDIR 00:11:41.354 #undef SPDK_CONFIG_LTO 00:11:41.354 #define SPDK_CONFIG_MAX_LCORES 128 00:11:41.354 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:41.354 #define SPDK_CONFIG_NVME_CUSE 1 00:11:41.354 #undef SPDK_CONFIG_OCF 00:11:41.354 #define SPDK_CONFIG_OCF_PATH 00:11:41.354 #define SPDK_CONFIG_OPENSSL_PATH 00:11:41.354 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:41.354 #define SPDK_CONFIG_PGO_DIR 00:11:41.354 #undef SPDK_CONFIG_PGO_USE 00:11:41.354 #define SPDK_CONFIG_PREFIX /usr/local 00:11:41.354 #undef SPDK_CONFIG_RAID5F 00:11:41.354 #undef SPDK_CONFIG_RBD 00:11:41.354 #define SPDK_CONFIG_RDMA 1 00:11:41.354 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:41.354 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:41.354 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:41.354 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:41.354 #define SPDK_CONFIG_SHARED 1 00:11:41.354 #undef SPDK_CONFIG_SMA 00:11:41.354 #define SPDK_CONFIG_TESTS 1 00:11:41.354 #undef SPDK_CONFIG_TSAN 00:11:41.354 #define SPDK_CONFIG_UBLK 1 00:11:41.354 #define SPDK_CONFIG_UBSAN 1 00:11:41.354 #undef 
SPDK_CONFIG_UNIT_TESTS 00:11:41.354 #undef SPDK_CONFIG_URING 00:11:41.354 #define SPDK_CONFIG_URING_PATH 00:11:41.354 #undef SPDK_CONFIG_URING_ZNS 00:11:41.354 #undef SPDK_CONFIG_USDT 00:11:41.354 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:41.354 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:41.354 #define SPDK_CONFIG_VFIO_USER 1 00:11:41.354 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:41.354 #define SPDK_CONFIG_VHOST 1 00:11:41.354 #define SPDK_CONFIG_VIRTIO 1 00:11:41.354 #undef SPDK_CONFIG_VTUNE 00:11:41.354 #define SPDK_CONFIG_VTUNE_DIR 00:11:41.354 #define SPDK_CONFIG_WERROR 1 00:11:41.354 #define SPDK_CONFIG_WPDK_DIR 00:11:41.354 #undef SPDK_CONFIG_XNVME 00:11:41.354 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:41.354 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:41.354 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:41.354 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:41.354 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.354 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.354 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.354 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.354 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.354 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.355 20:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:41.355 20:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:41.355 
20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:41.355 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:41.355 20:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:41.356 
20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@140 -- # : v22.11.4 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:41.356 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:41.356 20:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 
00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:41.621 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 162676 ]] 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 162676 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.tgUHSb 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.tgUHSb/tests/target /tmp/spdk.tgUHSb 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=54520918016 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988511744 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7467593728 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:41.622 
20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30984224768 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994255872 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375273472 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397703168 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22429696 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30993944576 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994255872 00:11:41.622 20:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=311296 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:41.622 * Looking for test storage... 
00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=54520918016 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9682186240 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.622 20:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:41.622 20:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:41.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.622 --rc genhtml_branch_coverage=1 00:11:41.622 --rc genhtml_function_coverage=1 00:11:41.622 --rc genhtml_legend=1 00:11:41.622 --rc geninfo_all_blocks=1 00:11:41.622 --rc geninfo_unexecuted_blocks=1 00:11:41.622 00:11:41.622 ' 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:41.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.622 --rc genhtml_branch_coverage=1 00:11:41.622 --rc genhtml_function_coverage=1 00:11:41.622 --rc genhtml_legend=1 00:11:41.622 --rc geninfo_all_blocks=1 00:11:41.622 --rc geninfo_unexecuted_blocks=1 00:11:41.622 00:11:41.622 ' 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:41.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.622 --rc genhtml_branch_coverage=1 00:11:41.622 --rc genhtml_function_coverage=1 00:11:41.622 --rc genhtml_legend=1 00:11:41.622 --rc geninfo_all_blocks=1 00:11:41.622 --rc geninfo_unexecuted_blocks=1 00:11:41.622 00:11:41.622 ' 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:41.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.622 --rc genhtml_branch_coverage=1 00:11:41.622 --rc genhtml_function_coverage=1 00:11:41.622 --rc genhtml_legend=1 00:11:41.622 --rc geninfo_all_blocks=1 00:11:41.622 --rc geninfo_unexecuted_blocks=1 00:11:41.622 00:11:41.622 ' 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:41.622 20:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:41.622 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:41.623 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:41.623 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:44.172 20:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:44.172 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:44.172 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:44.172 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.172 20:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:44.173 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:44.173 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:44.173 20:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:44.173 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:44.173 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:11:44.173 00:11:44.173 --- 10.0.0.2 ping statistics --- 00:11:44.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.173 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:44.173 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:44.173 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:11:44.173 00:11:44.173 --- 10.0.0.1 ping statistics --- 00:11:44.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.173 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:44.173 20:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:44.173 ************************************ 00:11:44.173 START TEST nvmf_filesystem_no_in_capsule 00:11:44.173 ************************************ 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=164435 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 164435 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 164435 ']' 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.173 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.173 [2024-11-18 20:12:55.932167] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:11:44.174 [2024-11-18 20:12:55.932254] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.174 [2024-11-18 20:12:56.005793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:44.174 [2024-11-18 20:12:56.052172] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.174 [2024-11-18 20:12:56.052224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:44.174 [2024-11-18 20:12:56.052253] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.174 [2024-11-18 20:12:56.052264] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:44.174 [2024-11-18 20:12:56.052274] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:44.174 [2024-11-18 20:12:56.053861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.174 [2024-11-18 20:12:56.053954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:44.174 [2024-11-18 20:12:56.053887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.174 [2024-11-18 20:12:56.053957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.434 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.434 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:44.434 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:44.434 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:44.434 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.435 [2024-11-18 20:12:56.212145] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.435 Malloc1 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.435 [2024-11-18 20:12:56.396587] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:44.435 20:12:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:44.435 { 00:11:44.435 "name": "Malloc1", 00:11:44.435 "aliases": [ 00:11:44.435 "54324d8f-0cc9-4488-9717-5d520baeedc9" 00:11:44.435 ], 00:11:44.435 "product_name": "Malloc disk", 00:11:44.435 "block_size": 512, 00:11:44.435 "num_blocks": 1048576, 00:11:44.435 "uuid": "54324d8f-0cc9-4488-9717-5d520baeedc9", 00:11:44.435 "assigned_rate_limits": { 00:11:44.435 "rw_ios_per_sec": 0, 00:11:44.435 "rw_mbytes_per_sec": 0, 00:11:44.435 "r_mbytes_per_sec": 0, 00:11:44.435 "w_mbytes_per_sec": 0 00:11:44.435 }, 00:11:44.435 "claimed": true, 00:11:44.435 "claim_type": "exclusive_write", 00:11:44.435 "zoned": false, 00:11:44.435 "supported_io_types": { 00:11:44.435 "read": true, 00:11:44.435 "write": true, 00:11:44.435 "unmap": true, 00:11:44.435 "flush": true, 00:11:44.435 "reset": true, 00:11:44.435 "nvme_admin": false, 00:11:44.435 "nvme_io": false, 00:11:44.435 "nvme_io_md": false, 00:11:44.435 "write_zeroes": true, 00:11:44.435 "zcopy": true, 00:11:44.435 "get_zone_info": false, 00:11:44.435 "zone_management": false, 00:11:44.435 "zone_append": false, 00:11:44.435 "compare": false, 00:11:44.435 "compare_and_write": 
false, 00:11:44.435 "abort": true, 00:11:44.435 "seek_hole": false, 00:11:44.435 "seek_data": false, 00:11:44.435 "copy": true, 00:11:44.435 "nvme_iov_md": false 00:11:44.435 }, 00:11:44.435 "memory_domains": [ 00:11:44.435 { 00:11:44.435 "dma_device_id": "system", 00:11:44.435 "dma_device_type": 1 00:11:44.435 }, 00:11:44.435 { 00:11:44.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.435 "dma_device_type": 2 00:11:44.435 } 00:11:44.435 ], 00:11:44.435 "driver_specific": {} 00:11:44.435 } 00:11:44.435 ]' 00:11:44.435 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:44.697 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:44.697 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:44.697 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:44.697 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:44.697 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:44.697 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:44.697 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:45.269 20:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:45.269 20:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:45.269 20:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:45.269 20:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:45.269 20:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:47.816 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:47.816 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:47.816 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:47.816 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:47.816 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:47.816 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:47.816 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:47.816 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:47.816 20:12:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:47.816 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:47.816 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:47.816 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:47.816 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:47.816 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:47.816 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:47.816 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:47.816 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:47.816 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:48.080 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:49.024 20:13:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:49.024 20:13:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:49.024 20:13:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:49.025 20:13:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.025 20:13:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.025 ************************************ 00:11:49.025 START TEST filesystem_ext4 00:11:49.025 ************************************ 00:11:49.025 20:13:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:49.025 20:13:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:49.025 20:13:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:49.025 20:13:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:49.025 20:13:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:49.025 20:13:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:49.025 20:13:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:49.025 20:13:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:49.025 20:13:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:49.025 20:13:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:49.025 20:13:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:49.284 mke2fs 1.47.0 (5-Feb-2023) 00:11:49.284 Discarding device blocks: 0/522240 done 00:11:49.284 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:49.284 Filesystem UUID: fb48641b-0574-43f4-b0f0-dc9a546b61e0 00:11:49.284 Superblock backups stored on blocks: 00:11:49.284 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:49.284 00:11:49.284 Allocating group tables: 0/64 done 00:11:49.284 Writing inode tables: 0/64 done 00:11:52.589 Creating journal (8192 blocks): done 00:11:54.362 Writing superblocks and filesystem accounting information: 0/64 done 00:11:54.362 00:11:54.362 20:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:54.362 20:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:00.946 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:00.946 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:00.946 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:00.946 20:13:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:00.946 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:00.946 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:00.946 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 164435 00:12:00.946 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:00.946 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:00.946 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:00.946 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:00.946 00:12:00.946 real 0m11.513s 00:12:00.946 user 0m0.018s 00:12:00.946 sys 0m0.113s 00:12:00.946 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.946 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:00.946 ************************************ 00:12:00.946 END TEST filesystem_ext4 00:12:00.946 ************************************ 00:12:00.946 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:00.946 
20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:00.946 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.946 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.946 ************************************ 00:12:00.946 START TEST filesystem_btrfs 00:12:00.946 ************************************ 00:12:00.946 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:00.946 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:00.946 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:00.946 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:00.946 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:00.946 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:00.946 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:00.946 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:00.946 20:13:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:00.946 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:00.946 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:00.946 btrfs-progs v6.8.1 00:12:00.946 See https://btrfs.readthedocs.io for more information. 00:12:00.946 00:12:00.946 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:00.946 NOTE: several default settings have changed in version 5.15, please make sure 00:12:00.946 this does not affect your deployments: 00:12:00.946 - DUP for metadata (-m dup) 00:12:00.946 - enabled no-holes (-O no-holes) 00:12:00.946 - enabled free-space-tree (-R free-space-tree) 00:12:00.946 00:12:00.946 Label: (null) 00:12:00.946 UUID: 4ac8fd9c-3842-4216-a6c0-9fa99ea8911f 00:12:00.946 Node size: 16384 00:12:00.946 Sector size: 4096 (CPU page size: 4096) 00:12:00.946 Filesystem size: 510.00MiB 00:12:00.946 Block group profiles: 00:12:00.946 Data: single 8.00MiB 00:12:00.946 Metadata: DUP 32.00MiB 00:12:00.946 System: DUP 8.00MiB 00:12:00.946 SSD detected: yes 00:12:00.946 Zoned device: no 00:12:00.946 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:00.946 Checksum: crc32c 00:12:00.946 Number of devices: 1 00:12:00.946 Devices: 00:12:00.946 ID SIZE PATH 00:12:00.946 1 510.00MiB /dev/nvme0n1p1 00:12:00.946 00:12:00.946 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:00.946 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:01.892 20:13:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:01.892 20:13:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:01.892 20:13:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:01.892 20:13:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:01.892 20:13:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:01.892 20:13:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:01.892 20:13:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 164435 00:12:01.892 20:13:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:01.892 20:13:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:01.892 20:13:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:01.892 20:13:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:01.892 00:12:01.892 real 0m1.170s 00:12:01.892 user 0m0.021s 00:12:01.892 sys 0m0.145s 00:12:01.892 20:13:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.892 
20:13:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:01.892 ************************************ 00:12:01.892 END TEST filesystem_btrfs 00:12:01.892 ************************************ 00:12:01.892 20:13:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:01.892 20:13:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:01.892 20:13:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.892 20:13:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.892 ************************************ 00:12:01.892 START TEST filesystem_xfs 00:12:01.892 ************************************ 00:12:01.892 20:13:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:01.892 20:13:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:01.892 20:13:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:01.892 20:13:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:01.892 20:13:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:01.892 20:13:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:01.892 20:13:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:01.892 20:13:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:01.892 20:13:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:01.892 20:13:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:01.892 20:13:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:02.152 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:02.152 = sectsz=512 attr=2, projid32bit=1 00:12:02.152 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:02.152 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:02.152 data = bsize=4096 blocks=130560, imaxpct=25 00:12:02.152 = sunit=0 swidth=0 blks 00:12:02.152 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:02.152 log =internal log bsize=4096 blocks=16384, version=2 00:12:02.152 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:02.152 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:03.095 Discarding blocks...Done. 
00:12:03.095 20:13:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:03.095 20:13:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:06.393 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:06.393 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:06.393 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:06.393 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:06.393 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:06.393 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:06.393 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 164435 00:12:06.393 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:06.393 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:06.393 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:06.393 20:13:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:06.393 00:12:06.393 real 0m4.237s 00:12:06.393 user 0m0.016s 00:12:06.393 sys 0m0.099s 00:12:06.393 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.393 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:06.393 ************************************ 00:12:06.393 END TEST filesystem_xfs 00:12:06.393 ************************************ 00:12:06.393 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:06.393 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:06.393 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:06.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.393 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:06.393 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:06.393 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:06.393 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.393 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:06.393 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.393 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:06.393 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.393 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.394 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.394 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.394 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:06.394 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 164435 00:12:06.394 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 164435 ']' 00:12:06.394 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 164435 00:12:06.394 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:06.394 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:06.394 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 164435 00:12:06.653 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:06.653 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:06.653 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 164435' 00:12:06.653 killing process with pid 164435 00:12:06.653 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 164435 00:12:06.653 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 164435 00:12:06.913 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:06.913 00:12:06.913 real 0m22.925s 00:12:06.913 user 1m28.957s 00:12:06.913 sys 0m2.918s 00:12:06.913 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.913 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.913 ************************************ 00:12:06.913 END TEST nvmf_filesystem_no_in_capsule 00:12:06.913 ************************************ 00:12:06.913 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:06.913 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:06.913 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.913 20:13:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:06.913 ************************************ 00:12:06.913 START TEST nvmf_filesystem_in_capsule 00:12:06.913 ************************************ 00:12:06.913 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:06.913 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:06.913 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:06.913 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:06.913 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:06.913 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.913 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=167347 00:12:06.913 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:06.913 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 167347 00:12:06.913 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 167347 ']' 00:12:06.913 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.913 20:13:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:06.913 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.913 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:06.913 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.913 [2024-11-18 20:13:18.915705] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:12:06.913 [2024-11-18 20:13:18.915793] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.172 [2024-11-18 20:13:18.985993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:07.172 [2024-11-18 20:13:19.028354] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.173 [2024-11-18 20:13:19.028411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.173 [2024-11-18 20:13:19.028439] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.173 [2024-11-18 20:13:19.028450] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.173 [2024-11-18 20:13:19.028459] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
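`waitforlisten`, traced above with `rpc_addr=/var/tmp/spdk.sock` and `max_retries=100`, blocks until the freshly started nvmf_tgt process is both alive and listening on its RPC socket. A hedged sketch of that polling pattern — `waitforlisten_sketch` is an illustrative name, and the real helper in common/autotest_common.sh additionally probes the socket with rpc.py rather than only checking that the socket file exists:

```shell
# Minimal sketch of the waitforlisten pattern: poll until the target
# process (by pid) is alive and its UNIX-domain RPC socket appears.
waitforlisten_sketch() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
  local max_retries=100 i=0
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  while (( i < max_retries )); do
    kill -0 "$pid" 2>/dev/null || return 1  # process died before listening
    [ -S "$rpc_addr" ] && return 0          # socket exists: target is up
    sleep 0.1
    i=$((i + 1))
  done
  return 1  # timed out waiting for the socket
}
```

The "Waiting for process to start up..." line printed in the log above is exactly this helper's progress message, emitted once before the poll loop begins.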
00:12:07.173 [2024-11-18 20:13:19.029861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.173 [2024-11-18 20:13:19.029918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.173 [2024-11-18 20:13:19.029986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:07.173 [2024-11-18 20:13:19.029989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.173 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:07.173 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:07.173 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:07.173 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:07.173 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.173 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:07.173 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:07.173 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:07.173 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.173 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.173 [2024-11-18 20:13:19.167715] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:07.173 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.173 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:07.173 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.173 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.438 Malloc1 00:12:07.438 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.439 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:07.439 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.439 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.439 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.439 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:07.439 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.439 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.439 20:13:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.439 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.439 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.439 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.439 [2024-11-18 20:13:19.336208] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.439 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.439 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:07.439 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:07.439 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:07.439 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:07.439 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:07.439 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:07.439 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.439 20:13:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.439 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.439 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:07.439 { 00:12:07.439 "name": "Malloc1", 00:12:07.439 "aliases": [ 00:12:07.439 "21516844-a9b1-48b2-8882-b30f52a91b5c" 00:12:07.439 ], 00:12:07.439 "product_name": "Malloc disk", 00:12:07.439 "block_size": 512, 00:12:07.439 "num_blocks": 1048576, 00:12:07.439 "uuid": "21516844-a9b1-48b2-8882-b30f52a91b5c", 00:12:07.439 "assigned_rate_limits": { 00:12:07.439 "rw_ios_per_sec": 0, 00:12:07.439 "rw_mbytes_per_sec": 0, 00:12:07.439 "r_mbytes_per_sec": 0, 00:12:07.439 "w_mbytes_per_sec": 0 00:12:07.439 }, 00:12:07.439 "claimed": true, 00:12:07.439 "claim_type": "exclusive_write", 00:12:07.439 "zoned": false, 00:12:07.439 "supported_io_types": { 00:12:07.440 "read": true, 00:12:07.440 "write": true, 00:12:07.440 "unmap": true, 00:12:07.440 "flush": true, 00:12:07.440 "reset": true, 00:12:07.440 "nvme_admin": false, 00:12:07.440 "nvme_io": false, 00:12:07.440 "nvme_io_md": false, 00:12:07.440 "write_zeroes": true, 00:12:07.440 "zcopy": true, 00:12:07.440 "get_zone_info": false, 00:12:07.440 "zone_management": false, 00:12:07.440 "zone_append": false, 00:12:07.440 "compare": false, 00:12:07.440 "compare_and_write": false, 00:12:07.440 "abort": true, 00:12:07.440 "seek_hole": false, 00:12:07.440 "seek_data": false, 00:12:07.440 "copy": true, 00:12:07.440 "nvme_iov_md": false 00:12:07.440 }, 00:12:07.440 "memory_domains": [ 00:12:07.440 { 00:12:07.440 "dma_device_id": "system", 00:12:07.440 "dma_device_type": 1 00:12:07.440 }, 00:12:07.440 { 00:12:07.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.440 "dma_device_type": 2 00:12:07.440 } 00:12:07.440 ], 00:12:07.440 
"driver_specific": {} 00:12:07.440 } 00:12:07.440 ]' 00:12:07.440 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:07.440 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:07.440 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:07.440 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:07.440 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:07.440 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:07.440 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:07.440 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:08.385 20:13:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:08.385 20:13:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:08.385 20:13:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:08.385 20:13:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:12:08.385 20:13:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:10.300 20:13:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:10.300 20:13:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:10.300 20:13:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:10.300 20:13:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:10.300 20:13:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:10.300 20:13:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:10.300 20:13:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:10.300 20:13:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:10.300 20:13:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:10.300 20:13:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:10.300 20:13:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:10.300 20:13:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:10.300 20:13:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:10.300 20:13:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:10.300 20:13:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:10.300 20:13:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:10.300 20:13:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:10.563 20:13:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:10.825 20:13:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:11.773 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:11.773 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:11.773 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:11.773 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.773 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.773 ************************************ 00:12:11.773 START TEST filesystem_in_capsule_ext4 00:12:11.773 ************************************ 00:12:11.773 20:13:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:11.773 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:11.773 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:11.773 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:11.773 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:11.773 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:11.773 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:11.773 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:11.773 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:11.773 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:11.773 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:11.773 mke2fs 1.47.0 (5-Feb-2023) 00:12:12.035 Discarding device blocks: 
0/522240 done 00:12:12.035 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:12.035 Filesystem UUID: 2c9ea865-0057-4dcd-9ce8-6af106cf7fea 00:12:12.035 Superblock backups stored on blocks: 00:12:12.035 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:12.035 00:12:12.035 Allocating group tables: 0/64 done 00:12:12.035 Writing inode tables: 0/64 done 00:12:15.335 Creating journal (8192 blocks): done 00:12:17.217 Writing superblocks and filesystem accounting information: 0/64 done 00:12:17.217 00:12:17.218 20:13:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:17.218 20:13:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:23.797 20:13:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:23.797 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:23.797 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:23.797 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:23.797 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:23.797 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:23.797 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 167347 00:12:23.797 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:23.797 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:23.797 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:23.797 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:23.797 00:12:23.797 real 0m11.413s 00:12:23.797 user 0m0.026s 00:12:23.797 sys 0m0.054s 00:12:23.797 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.797 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:23.797 ************************************ 00:12:23.797 END TEST filesystem_in_capsule_ext4 00:12:23.797 ************************************ 00:12:23.797 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:23.797 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:23.797 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.797 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.797 ************************************ 00:12:23.797 START 
TEST filesystem_in_capsule_btrfs 00:12:23.797 ************************************ 00:12:23.798 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:23.798 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:23.798 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:23.798 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:23.798 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:23.798 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:23.798 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:23.798 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:23.798 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:23.798 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:23.798 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:23.798 btrfs-progs v6.8.1 00:12:23.798 See https://btrfs.readthedocs.io for more information. 00:12:23.798 00:12:23.798 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:23.798 NOTE: several default settings have changed in version 5.15, please make sure 00:12:23.798 this does not affect your deployments: 00:12:23.798 - DUP for metadata (-m dup) 00:12:23.798 - enabled no-holes (-O no-holes) 00:12:23.798 - enabled free-space-tree (-R free-space-tree) 00:12:23.798 00:12:23.798 Label: (null) 00:12:23.798 UUID: ecfad062-c10c-4224-a1c6-b87a100e0f0b 00:12:23.798 Node size: 16384 00:12:23.798 Sector size: 4096 (CPU page size: 4096) 00:12:23.798 Filesystem size: 510.00MiB 00:12:23.798 Block group profiles: 00:12:23.798 Data: single 8.00MiB 00:12:23.798 Metadata: DUP 32.00MiB 00:12:23.798 System: DUP 8.00MiB 00:12:23.798 SSD detected: yes 00:12:23.798 Zoned device: no 00:12:23.798 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:23.798 Checksum: crc32c 00:12:23.798 Number of devices: 1 00:12:23.798 Devices: 00:12:23.798 ID SIZE PATH 00:12:23.798 1 510.00MiB /dev/nvme0n1p1 00:12:23.798 00:12:23.798 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:23.798 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:24.367 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:24.367 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:24.367 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:24.367 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:24.367 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:24.367 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:24.367 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 167347 00:12:24.367 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:24.367 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:24.367 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:24.367 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:24.367 00:12:24.367 real 0m1.082s 00:12:24.367 user 0m0.017s 00:12:24.367 sys 0m0.102s 00:12:24.367 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:24.367 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:24.367 ************************************ 00:12:24.367 END TEST filesystem_in_capsule_btrfs 00:12:24.367 ************************************ 00:12:24.367 20:13:36 
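The verification steps traced above (target/filesystem.sh@40 and @43) pipe `lsblk -l -o NAME` into `grep -q -w`, relying on grep's whole-word matching to tell the parent device apart from its partition. A minimal standalone illustration of why `-w` matters here:

```shell
# grep -w matches whole words only: "nvme0n1" does NOT match inside
# "nvme0n1p1", because the following 'p' is a word character, so the
# partition name cannot satisfy a -w search for the parent device.
printf 'nvme0n1p1\n' | grep -q -w nvme0n1 && echo yes || echo no   # → no
printf 'nvme0n1\nnvme0n1p1\n' | grep -q -w nvme0n1 && echo yes || echo no   # → yes
```

Without `-w`, the first check (target/filesystem.sh@40) would succeed on any line merely containing the device name, defeating the post-unmount sanity check.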
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:24.367 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:24.367 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:24.367 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:24.367 ************************************ 00:12:24.367 START TEST filesystem_in_capsule_xfs 00:12:24.367 ************************************ 00:12:24.367 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:24.367 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:24.367 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:24.367 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:24.368 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:24.368 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:24.368 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:24.368 
20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:24.368 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:24.368 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:24.368 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:24.368 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:24.368 = sectsz=512 attr=2, projid32bit=1 00:12:24.368 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:24.368 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:24.368 data = bsize=4096 blocks=130560, imaxpct=25 00:12:24.368 = sunit=0 swidth=0 blks 00:12:24.368 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:24.368 log =internal log bsize=4096 blocks=16384, version=2 00:12:24.368 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:24.368 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:25.757 Discarding blocks...Done. 
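The `make_filesystem` trace above (autotest_common.sh@935–938) shows the helper testing `'[' xfs = ext4 ']'` before settling on `force=-f`. The reason for the branch is that mkfs front-ends disagree on the overwrite flag; a condensed sketch of that selection (the `-F` value for the ext4 branch is an assumption based on `mkfs.ext4`'s option naming, since only the non-ext4 path is exercised in this log):

```shell
# Sketch of the force-flag choice in make_filesystem: mkfs.btrfs and
# mkfs.xfs take -f to overwrite an existing filesystem, while mkfs.ext4
# spells the same thing -F (assumed; not exercised in this run).
pick_force_flag() {
    if [ "$1" = ext4 ]; then
        printf '%s\n' -F
    else
        printf '%s\n' -f
    fi
}
```

The helper then invokes `mkfs.$fstype $force $dev_name`, which is the `mkfs.xfs -f /dev/nvme0n1p1` call whose meta-data dump appears above.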
00:12:25.757 20:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:25.757 20:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:27.140 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:27.140 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:27.140 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:27.140 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:27.140 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:27.140 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:27.140 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 167347 00:12:27.140 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:27.140 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:27.398 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:27.398 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:27.398 00:12:27.398 real 0m2.908s 00:12:27.398 user 0m0.022s 00:12:27.398 sys 0m0.054s 00:12:27.398 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:27.398 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:27.398 ************************************ 00:12:27.398 END TEST filesystem_in_capsule_xfs 00:12:27.398 ************************************ 00:12:27.398 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:27.398 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:27.398 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:27.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.398 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:27.398 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:27.398 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:27.398 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.398 20:13:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:27.398 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.398 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:27.398 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:27.398 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.398 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:27.398 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.398 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:27.398 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 167347 00:12:27.398 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 167347 ']' 00:12:27.398 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 167347 00:12:27.398 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:27.398 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.398 20:13:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 167347 00:12:27.658 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:27.658 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:27.658 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 167347' 00:12:27.658 killing process with pid 167347 00:12:27.658 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 167347 00:12:27.658 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 167347 00:12:27.919 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:27.919 00:12:27.919 real 0m20.994s 00:12:27.919 user 1m21.452s 00:12:27.919 sys 0m2.505s 00:12:27.919 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:27.919 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:27.919 ************************************ 00:12:27.919 END TEST nvmf_filesystem_in_capsule 00:12:27.919 ************************************ 00:12:27.919 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:27.919 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:27.919 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:27.919 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
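The `killprocess 167347` sequence traced above (autotest_common.sh@954–978) follows a fixed pattern: probe liveness with `kill -0`, read the command name with `ps --no-headers -o comm=`, refuse to signal a bare `sudo`, then kill and reap. A self-contained re-creation of that pattern (simplified: the real helper also branches on `uname` for FreeBSD):

```shell
# Sketch of the killprocess pattern: kill -0 sends no signal but reports
# whether the pid exists, ps -o comm= recovers the process name, and the
# final wait reaps the child so the exit status is collected.
killprocess_sketch() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 1           # process still alive?
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1                   # never kill plain sudo
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                          # reap (ignore SIGTERM status)
    return 0
}
```

In the log the name check resolves to `reactor_0`, the SPDK target's poller thread, so the kill proceeds and the subsequent `wait 167347` reaps it.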
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:27.919 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:27.919 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:27.919 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:27.919 rmmod nvme_tcp 00:12:27.919 rmmod nvme_fabrics 00:12:27.919 rmmod nvme_keyring 00:12:28.178 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:28.178 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:28.178 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:28.178 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:28.178 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:28.178 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:28.178 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:28.178 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:28.178 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:28.178 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:28.178 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:28.178 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:28.178 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:28.178 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.178 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:28.178 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.090 20:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:30.090 00:12:30.090 real 0m48.834s 00:12:30.090 user 2m51.510s 00:12:30.090 sys 0m7.251s 00:12:30.090 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.090 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:30.090 ************************************ 00:12:30.090 END TEST nvmf_filesystem 00:12:30.090 ************************************ 00:12:30.090 20:13:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:30.090 20:13:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:30.090 20:13:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.090 20:13:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:30.090 ************************************ 00:12:30.090 START TEST nvmf_target_discovery 00:12:30.090 ************************************ 00:12:30.090 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:30.350 * Looking for test storage... 
00:12:30.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:30.350 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:30.350 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:12:30.350 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:30.350 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:30.350 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:30.350 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:30.350 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:30.350 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:30.350 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:30.350 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:30.350 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:30.350 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:30.350 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:30.350 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:30.350 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:30.351 
20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
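The `lt 1.15 2` walk traced above (scripts/common.sh@333–368) splits each version string on `.`, `-` or `:` into an array with `IFS=.-:` and `read -ra`, then compares field by field as integers, padding the shorter version with zeros. A condensed re-implementation of the same comparison (a sketch, not the exact SPDK helper, which also validates each field with a `decimal` check):

```shell
# Field-wise version comparison, as in cmp_versions: split on '.', '-' or
# ':' and compare numerically left to right; missing fields count as 0.
version_lt() {
    local IFS=.-: i a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for (( i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}
```

This is why `1.15` compares below `2` in the trace: the very first fields already differ (1 < 2), so the remaining fields never matter.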
lcov_function_coverage=1' 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:30.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.351 --rc genhtml_branch_coverage=1 00:12:30.351 --rc genhtml_function_coverage=1 00:12:30.351 --rc genhtml_legend=1 00:12:30.351 --rc geninfo_all_blocks=1 00:12:30.351 --rc geninfo_unexecuted_blocks=1 00:12:30.351 00:12:30.351 ' 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:30.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.351 --rc genhtml_branch_coverage=1 00:12:30.351 --rc genhtml_function_coverage=1 00:12:30.351 --rc genhtml_legend=1 00:12:30.351 --rc geninfo_all_blocks=1 00:12:30.351 --rc geninfo_unexecuted_blocks=1 00:12:30.351 00:12:30.351 ' 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:30.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.351 --rc genhtml_branch_coverage=1 00:12:30.351 --rc genhtml_function_coverage=1 00:12:30.351 --rc genhtml_legend=1 00:12:30.351 --rc geninfo_all_blocks=1 00:12:30.351 --rc geninfo_unexecuted_blocks=1 00:12:30.351 00:12:30.351 ' 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:30.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.351 --rc genhtml_branch_coverage=1 00:12:30.351 --rc genhtml_function_coverage=1 00:12:30.351 --rc genhtml_legend=1 00:12:30.351 --rc geninfo_all_blocks=1 00:12:30.351 --rc geninfo_unexecuted_blocks=1 00:12:30.351 00:12:30.351 ' 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:30.351 20:13:42 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:30.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
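The three `PATH=` lines above (paths/export.sh@2–4) grow visibly longer each time the script is re-sourced, because each source prepends the same toolchain directories again. The scripts tolerate this, but a standard dedupe pass (not part of the SPDK scripts; shown only to make the duplication above concrete) would collapse the repeats:

```shell
# Remove duplicate entries from a PATH-like colon-separated list while
# preserving first-seen order. Splitting relies on the function-local IFS.
dedupe_path() {
    local IFS=: p out=
    for p in $1; do
        case ":$out:" in
            *":$p:"*) ;;                 # already present, skip
            *) out=${out:+$out:}$p ;;    # append, adding ':' after the first
        esac
    done
    printf '%s\n' "$out"
}
```

Applied to the echoed PATH above, this would leave one copy each of the golangci, protoc, and go bin directories instead of seven.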
NULL_BDEV_SIZE=102400 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.351 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.352 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.352 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:30.352 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:30.352 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:30.352 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.889 20:13:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:32.889 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:32.889 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:32.889 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:32.889 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:32.889 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:32.889 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:32.889 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:32.889 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:32.889 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:32.889 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:32.889 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:32.889 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:32.889 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:32.890 20:13:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:32.890 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:32.890 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:32.890 20:13:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:32.890 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:32.890 20:13:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:32.890 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:32.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:32.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:12:32.890 00:12:32.890 --- 10.0.0.2 ping statistics --- 00:12:32.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.890 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:12:32.890 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:32.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:32.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:12:32.890 00:12:32.890 --- 10.0.0.1 ping statistics --- 00:12:32.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.891 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:12:32.891 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.891 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:32.891 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:32.891 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.891 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:32.891 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:32.891 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.891 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:32.891 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:32.891 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:32.891 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:32.891 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:32.891 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.891 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=171918 00:12:32.891 20:13:44 
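The `nvmf_tcp_init` sequence traced above (nvmf/common.sh@250-291) can be condensed into a standalone sketch. Interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the 10.0.0.x addresses are taken from this run; the `IP`/`IPT` indirection is an illustrative addition (the real script calls `ip`/`iptables` directly) so the sketch can be inspected without root or the real NICs:

```shell
#!/usr/bin/env bash
# Sketch of the NVMe/TCP netns setup seen in the trace above.
# IP/IPT default to "echo ..." for a dry run; set IP=ip IPT=iptables to apply.
IP="${IP:-echo ip}"
IPT="${IPT:-echo iptables}"
NS=cvl_0_0_ns_spdk

$IP netns add "$NS"                                       # target runs in its own netns
$IP link set cvl_0_0 netns "$NS"                          # move target-side port into it
$IP addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP stays on the host
$IP netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP inside the netns
$IP link set cvl_0_1 up
$IP netns exec "$NS" ip link set cvl_0_0 up
$IP netns exec "$NS" ip link set lo up
$IPT -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP on 4420
```

The two `ping -c 1` probes in the trace then verify reachability in both directions before `nvmf_tgt` is launched inside the namespace via `ip netns exec`.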
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:32.891 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 171918 00:12:32.891 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 171918 ']' 00:12:32.891 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.891 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:32.891 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.891 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:32.891 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.891 [2024-11-18 20:13:44.623549] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:12:32.891 [2024-11-18 20:13:44.623648] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.891 [2024-11-18 20:13:44.696116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.891 [2024-11-18 20:13:44.744885] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:32.891 [2024-11-18 20:13:44.744952] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.891 [2024-11-18 20:13:44.744965] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.891 [2024-11-18 20:13:44.744991] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.891 [2024-11-18 20:13:44.745001] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:32.891 [2024-11-18 20:13:44.746584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.891 [2024-11-18 20:13:44.746653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.891 [2024-11-18 20:13:44.746715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.891 [2024-11-18 20:13:44.746719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.891 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:32.891 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:32.891 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:32.891 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:32.891 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.891 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.891 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:32.891 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.891 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.891 [2024-11-18 20:13:44.895054] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:33.153 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.153 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:33.153 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:33.153 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:33.153 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.153 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.153 Null1 00:12:33.153 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.153 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:33.153 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.153 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.153 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.153 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:33.153 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.153 
20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.154 [2024-11-18 20:13:44.939358] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.154 Null2 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.154 
20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.154 Null3 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.154 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.154 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.154 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:33.154 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:33.154 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.154 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.154 Null4 00:12:33.154 
20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.154 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:33.154 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.154 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.154 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.154 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:33.154 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.154 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.154 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.154 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:33.154 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.154 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.154 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.154 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:33.154 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.154 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.154 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.154 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:33.154 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.154 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.154 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.154 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:33.415 00:12:33.415 Discovery Log Number of Records 6, Generation counter 6 00:12:33.415 =====Discovery Log Entry 0====== 00:12:33.415 trtype: tcp 00:12:33.415 adrfam: ipv4 00:12:33.415 subtype: current discovery subsystem 00:12:33.415 treq: not required 00:12:33.415 portid: 0 00:12:33.415 trsvcid: 4420 00:12:33.415 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:33.415 traddr: 10.0.0.2 00:12:33.415 eflags: explicit discovery connections, duplicate discovery information 00:12:33.415 sectype: none 00:12:33.415 =====Discovery Log Entry 1====== 00:12:33.415 trtype: tcp 00:12:33.415 adrfam: ipv4 00:12:33.415 subtype: nvme subsystem 00:12:33.415 treq: not required 00:12:33.415 portid: 0 00:12:33.415 trsvcid: 4420 00:12:33.415 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:33.415 traddr: 10.0.0.2 00:12:33.415 eflags: none 00:12:33.415 sectype: none 00:12:33.415 =====Discovery Log Entry 2====== 00:12:33.415 
trtype: tcp 00:12:33.415 adrfam: ipv4 00:12:33.415 subtype: nvme subsystem 00:12:33.415 treq: not required 00:12:33.415 portid: 0 00:12:33.415 trsvcid: 4420 00:12:33.415 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:33.415 traddr: 10.0.0.2 00:12:33.415 eflags: none 00:12:33.415 sectype: none 00:12:33.415 =====Discovery Log Entry 3====== 00:12:33.415 trtype: tcp 00:12:33.415 adrfam: ipv4 00:12:33.415 subtype: nvme subsystem 00:12:33.415 treq: not required 00:12:33.415 portid: 0 00:12:33.415 trsvcid: 4420 00:12:33.415 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:33.415 traddr: 10.0.0.2 00:12:33.415 eflags: none 00:12:33.415 sectype: none 00:12:33.415 =====Discovery Log Entry 4====== 00:12:33.415 trtype: tcp 00:12:33.415 adrfam: ipv4 00:12:33.415 subtype: nvme subsystem 00:12:33.415 treq: not required 00:12:33.415 portid: 0 00:12:33.415 trsvcid: 4420 00:12:33.415 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:33.415 traddr: 10.0.0.2 00:12:33.415 eflags: none 00:12:33.415 sectype: none 00:12:33.415 =====Discovery Log Entry 5====== 00:12:33.415 trtype: tcp 00:12:33.415 adrfam: ipv4 00:12:33.415 subtype: discovery subsystem referral 00:12:33.415 treq: not required 00:12:33.415 portid: 0 00:12:33.415 trsvcid: 4430 00:12:33.415 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:33.415 traddr: 10.0.0.2 00:12:33.415 eflags: none 00:12:33.415 sectype: none 00:12:33.415 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:33.416 Perform nvmf subsystem discovery via RPC 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.416 [ 00:12:33.416 { 00:12:33.416 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:12:33.416 "subtype": "Discovery", 00:12:33.416 "listen_addresses": [ 00:12:33.416 { 00:12:33.416 "trtype": "TCP", 00:12:33.416 "adrfam": "IPv4", 00:12:33.416 "traddr": "10.0.0.2", 00:12:33.416 "trsvcid": "4420" 00:12:33.416 } 00:12:33.416 ], 00:12:33.416 "allow_any_host": true, 00:12:33.416 "hosts": [] 00:12:33.416 }, 00:12:33.416 { 00:12:33.416 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:33.416 "subtype": "NVMe", 00:12:33.416 "listen_addresses": [ 00:12:33.416 { 00:12:33.416 "trtype": "TCP", 00:12:33.416 "adrfam": "IPv4", 00:12:33.416 "traddr": "10.0.0.2", 00:12:33.416 "trsvcid": "4420" 00:12:33.416 } 00:12:33.416 ], 00:12:33.416 "allow_any_host": true, 00:12:33.416 "hosts": [], 00:12:33.416 "serial_number": "SPDK00000000000001", 00:12:33.416 "model_number": "SPDK bdev Controller", 00:12:33.416 "max_namespaces": 32, 00:12:33.416 "min_cntlid": 1, 00:12:33.416 "max_cntlid": 65519, 00:12:33.416 "namespaces": [ 00:12:33.416 { 00:12:33.416 "nsid": 1, 00:12:33.416 "bdev_name": "Null1", 00:12:33.416 "name": "Null1", 00:12:33.416 "nguid": "EA0644F2AC3F44CBB838406A7FDB92C6", 00:12:33.416 "uuid": "ea0644f2-ac3f-44cb-b838-406a7fdb92c6" 00:12:33.416 } 00:12:33.416 ] 00:12:33.416 }, 00:12:33.416 { 00:12:33.416 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:33.416 "subtype": "NVMe", 00:12:33.416 "listen_addresses": [ 00:12:33.416 { 00:12:33.416 "trtype": "TCP", 00:12:33.416 "adrfam": "IPv4", 00:12:33.416 "traddr": "10.0.0.2", 00:12:33.416 "trsvcid": "4420" 00:12:33.416 } 00:12:33.416 ], 00:12:33.416 "allow_any_host": true, 00:12:33.416 "hosts": [], 00:12:33.416 "serial_number": "SPDK00000000000002", 00:12:33.416 "model_number": "SPDK bdev Controller", 00:12:33.416 "max_namespaces": 32, 00:12:33.416 "min_cntlid": 1, 00:12:33.416 "max_cntlid": 65519, 00:12:33.416 "namespaces": [ 00:12:33.416 { 00:12:33.416 "nsid": 1, 00:12:33.416 "bdev_name": "Null2", 00:12:33.416 "name": "Null2", 00:12:33.416 "nguid": "06770F586BC944A09EBF73C0DA18A26C", 
00:12:33.416 "uuid": "06770f58-6bc9-44a0-9ebf-73c0da18a26c" 00:12:33.416 } 00:12:33.416 ] 00:12:33.416 }, 00:12:33.416 { 00:12:33.416 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:33.416 "subtype": "NVMe", 00:12:33.416 "listen_addresses": [ 00:12:33.416 { 00:12:33.416 "trtype": "TCP", 00:12:33.416 "adrfam": "IPv4", 00:12:33.416 "traddr": "10.0.0.2", 00:12:33.416 "trsvcid": "4420" 00:12:33.416 } 00:12:33.416 ], 00:12:33.416 "allow_any_host": true, 00:12:33.416 "hosts": [], 00:12:33.416 "serial_number": "SPDK00000000000003", 00:12:33.416 "model_number": "SPDK bdev Controller", 00:12:33.416 "max_namespaces": 32, 00:12:33.416 "min_cntlid": 1, 00:12:33.416 "max_cntlid": 65519, 00:12:33.416 "namespaces": [ 00:12:33.416 { 00:12:33.416 "nsid": 1, 00:12:33.416 "bdev_name": "Null3", 00:12:33.416 "name": "Null3", 00:12:33.416 "nguid": "7D43562EBB5643E8BB1350E81BEB2B1E", 00:12:33.416 "uuid": "7d43562e-bb56-43e8-bb13-50e81beb2b1e" 00:12:33.416 } 00:12:33.416 ] 00:12:33.416 }, 00:12:33.416 { 00:12:33.416 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:33.416 "subtype": "NVMe", 00:12:33.416 "listen_addresses": [ 00:12:33.416 { 00:12:33.416 "trtype": "TCP", 00:12:33.416 "adrfam": "IPv4", 00:12:33.416 "traddr": "10.0.0.2", 00:12:33.416 "trsvcid": "4420" 00:12:33.416 } 00:12:33.416 ], 00:12:33.416 "allow_any_host": true, 00:12:33.416 "hosts": [], 00:12:33.416 "serial_number": "SPDK00000000000004", 00:12:33.416 "model_number": "SPDK bdev Controller", 00:12:33.416 "max_namespaces": 32, 00:12:33.416 "min_cntlid": 1, 00:12:33.416 "max_cntlid": 65519, 00:12:33.416 "namespaces": [ 00:12:33.416 { 00:12:33.416 "nsid": 1, 00:12:33.416 "bdev_name": "Null4", 00:12:33.416 "name": "Null4", 00:12:33.416 "nguid": "1669EFA9D5F74CE0A07C6BBD84C399CD", 00:12:33.416 "uuid": "1669efa9-d5f7-4ce0-a07c-6bbd84c399cd" 00:12:33.416 } 00:12:33.416 ] 00:12:33.416 } 00:12:33.416 ] 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.416 
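The JSON reply above can be reduced to just the subsystem NQNs. A self-contained sketch (the sample reply is inlined and trimmed; in the real flow it would come from `rpc_cmd nvmf_get_subsystems`):

```shell
# Extract the "nqn" values from a captured nvmf_get_subsystems reply.
# The reply string here is a trimmed sample, not live RPC output.
reply='[{"nqn": "nqn.2014-08.org.nvmexpress.discovery"}, {"nqn": "nqn.2016-06.io.spdk:cnode1"}]'
echo "$reply" | grep -o '"nqn": "[^"]*"' | sed 's/"nqn": "//; s/"$//'
```

With a live target the same filter would typically be applied to the output of `scripts/rpc.py nvmf_get_subsystems` (or done more robustly with `jq -r '.[].nqn'`).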
20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.416 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:33.417 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:33.417 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.417 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:33.417 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.417 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:33.417 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.417 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.417 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.417 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:33.417 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.417 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.417 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.417 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:33.417 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:33.417 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.417 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.417 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.417 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:33.417 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:12:33.417 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:33.417 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:33.417 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:33.417 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:33.417 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:33.417 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:33.417 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:33.417 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:33.417 rmmod nvme_tcp 00:12:33.417 rmmod nvme_fabrics 00:12:33.677 rmmod nvme_keyring 00:12:33.677 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:33.677 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:33.677 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:33.677 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 171918 ']' 00:12:33.677 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 171918 00:12:33.677 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 171918 ']' 00:12:33.677 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 171918 00:12:33.677 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:33.677 
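The teardown trace above (discovery.sh lines 42-44) repeats one pattern four times: delete the subsystem, then its backing null bdev. A sketch of that loop, with `rpc_cmd` stubbed by `echo` so it runs without a live target:

```shell
# Stand-in for the harness's rpc_cmd helper; the real one forwards the
# call to scripts/rpc.py against the running nvmf target application.
rpc_cmd() { echo "rpc_cmd $*"; }

# Same shape as the loop in the trace: cnode1..cnode4 and Null1..Null4.
for i in $(seq 1 4); do
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    rpc_cmd bdev_null_delete "Null${i}"
done
```

Deleting the subsystem before its bdev matters in the real harness: removing a bdev still exposed by a subsystem would otherwise race with in-flight namespace teardown.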
20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:33.677 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 171918 00:12:33.677 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:33.677 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:33.677 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 171918' 00:12:33.677 killing process with pid 171918 00:12:33.677 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 171918 00:12:33.677 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 171918 00:12:33.677 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:33.677 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:33.677 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:33.677 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:33.677 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:33.677 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:33.677 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:33.677 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:33.677 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:12:33.677 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.677 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.677 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:36.227 00:12:36.227 real 0m5.660s 00:12:36.227 user 0m4.670s 00:12:36.227 sys 0m1.928s 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.227 ************************************ 00:12:36.227 END TEST nvmf_target_discovery 00:12:36.227 ************************************ 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:36.227 ************************************ 00:12:36.227 START TEST nvmf_referrals 00:12:36.227 ************************************ 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:36.227 * Looking for test storage... 
00:12:36.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:36.227 20:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:36.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.227 
--rc genhtml_branch_coverage=1 00:12:36.227 --rc genhtml_function_coverage=1 00:12:36.227 --rc genhtml_legend=1 00:12:36.227 --rc geninfo_all_blocks=1 00:12:36.227 --rc geninfo_unexecuted_blocks=1 00:12:36.227 00:12:36.227 ' 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:36.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.227 --rc genhtml_branch_coverage=1 00:12:36.227 --rc genhtml_function_coverage=1 00:12:36.227 --rc genhtml_legend=1 00:12:36.227 --rc geninfo_all_blocks=1 00:12:36.227 --rc geninfo_unexecuted_blocks=1 00:12:36.227 00:12:36.227 ' 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:36.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.227 --rc genhtml_branch_coverage=1 00:12:36.227 --rc genhtml_function_coverage=1 00:12:36.227 --rc genhtml_legend=1 00:12:36.227 --rc geninfo_all_blocks=1 00:12:36.227 --rc geninfo_unexecuted_blocks=1 00:12:36.227 00:12:36.227 ' 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:36.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.227 --rc genhtml_branch_coverage=1 00:12:36.227 --rc genhtml_function_coverage=1 00:12:36.227 --rc genhtml_legend=1 00:12:36.227 --rc geninfo_all_blocks=1 00:12:36.227 --rc geninfo_unexecuted_blocks=1 00:12:36.227 00:12:36.227 ' 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.227 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.227 
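The `lt 1.15 2` check traced above is scripts/common.sh's component-wise version comparison (it splits on `IFS=.-:` and walks the parts). A simplified stand-in using GNU `sort -V` instead of the manual loop; `version_lt` is a hypothetical name, not the script's own helper:

```shell
# True when $1 sorts strictly before $2 in version order.
# Relies on GNU sort's -V (version sort); equal strings are not "less".
version_lt() {
    [ "$1" = "$2" ] && return 1
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n 1)" = "$1" ]
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"
```

This is why the run above selects the legacy `--rc lcov_branch_coverage=1` flag set: lcov 1.15 is older than the 2.x series that renamed those options.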
20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.228 20:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:36.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:36.228 20:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:36.228 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=()
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=()
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=()
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:12:38.140 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:12:38.140 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]]
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:12:38.140 Found net devices under 0000:0a:00.0: cvl_0_0
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]]
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:12:38.140 Found net devices under 0000:0a:00.1: cvl_0_1
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:38.140 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:12:38.400 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:38.400 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:38.400 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:38.400 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:12:38.400 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:12:38.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:38.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms
00:12:38.400
00:12:38.400 --- 10.0.0.2 ping statistics ---
00:12:38.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:38.400 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms
00:12:38.400 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:38.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:38.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms
00:12:38.400
00:12:38.400 --- 10.0.0.1 ping statistics ---
00:12:38.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:38.400 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms
00:12:38.400 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:38.400 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0
00:12:38.400 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:38.400 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:38.400 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:38.400 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:38.400 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:38.400 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:38.400 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:12:38.400 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:12:38.400 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:38.400 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:38.400 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:38.400 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=174016
00:12:38.400 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 174016
00:12:38.400 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 174016 ']'
00:12:38.400 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:38.400 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:38.400 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:38.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:38.400 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:38.400 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:38.400 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:38.400 [2024-11-18 20:13:50.283467] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization...
00:12:38.400 [2024-11-18 20:13:50.283555] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:38.400 [2024-11-18 20:13:50.352026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:38.400 [2024-11-18 20:13:50.396218] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:38.400 [2024-11-18 20:13:50.396287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:38.400 [2024-11-18 20:13:50.396315] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:38.400 [2024-11-18 20:13:50.396326] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:38.400 [2024-11-18 20:13:50.396335] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:38.400 [2024-11-18 20:13:50.397834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:38.400 [2024-11-18 20:13:50.397896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:38.400 [2024-11-18 20:13:50.398025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:12:38.400 [2024-11-18 20:13:50.398028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:38.660 [2024-11-18 20:13:50.579816] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:38.660 [2024-11-18 20:13:50.592066] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 ))
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:12:38.660 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.920 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:12:38.920 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
00:12:38.920 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme
00:12:38.920 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:12:38.920 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:12:38.920 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json
00:12:38.920 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:12:38.920 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:12:38.920 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:12:38.920 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
00:12:38.920 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
00:12:38.920 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.920 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:38.920 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.920 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430
00:12:38.920 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.920 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:38.920 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.920 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430
00:12:38.920 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.920 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:38.920 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.920 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals
00:12:38.920 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length
00:12:38.920 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.920 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:38.920 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.180 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 ))
00:12:39.180 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme
00:12:39.180 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:12:39.180 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:12:39.180 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json
00:12:39.181 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:12:39.181 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:12:39.181 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo
00:12:39.181 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]]
00:12:39.181 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
00:12:39.181 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.181 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:39.181 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.181 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
00:12:39.181 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.181 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:39.181 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.181 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc
00:12:39.181 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:12:39.181 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:12:39.181 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.181 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:12:39.181 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:39.181 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:12:39.440 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.440 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2
00:12:39.440 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]]
00:12:39.440 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme
00:12:39.440 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:12:39.440 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:12:39.440 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json
00:12:39.440 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:12:39.440 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:12:39.440 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2
00:12:39.440 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]]
00:12:39.440 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem'
00:12:39.440 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem'
00:12:39.440 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn
00:12:39.440 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json
00:12:39.440 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")'
00:12:39.700 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]]
00:12:39.700 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral'
00:12:39.700 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral'
00:12:39.700 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn
00:12:39.700 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json
00:12:39.700 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")'
00:12:39.960 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]]
00:12:39.960 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
00:12:39.960 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.960 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:39.960 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.960 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc
00:12:39.960 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:12:39.960 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:12:39.960 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.960 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:12:39.960 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:39.960 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:12:39.960 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.960 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2
00:12:39.960 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]]
00:12:39.960 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme
00:12:39.960 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:12:39.960 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:12:39.960 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json
00:12:39.960 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:12:39.960 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:12:40.221 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2
00:12:40.221 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]]
00:12:40.221 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem'
00:12:40.221 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn
00:12:40.221 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem'
00:12:40.221 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json
00:12:40.221 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")'
00:12:40.221 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]]
00:12:40.221 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral'
00:12:40.221 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn
00:12:40.221 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral'
00:12:40.221 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json
00:12:40.221 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")'
00:12:40.483 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]]
00:12:40.483 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery
00:12:40.483 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.483 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:40.483 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.483 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals
00:12:40.483 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length
00:12:40.483 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.483 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:40.483 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.483 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 ))
00:12:40.483 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme
00:12:40.483 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:12:40.483 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:12:40.483 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json
00:12:40.483 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:12:40.483 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:12:40.743 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo
00:12:40.743 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]]
00:12:40.743 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT
00:12:40.743 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini
00:12:40.743 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup
00:12:40.743 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync
00:12:40.743 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:12:40.743 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e
00:12:40.743 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:40.743 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:12:40.743 rmmod nvme_tcp
00:12:40.743 rmmod nvme_fabrics
00:12:40.743 rmmod nvme_keyring
00:12:40.743 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:40.743 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e
00:12:40.743 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0
00:12:40.743 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 174016 ']'
00:12:40.743 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 174016
00:12:40.743 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 174016 ']'
00:12:40.743 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 174016
00:12:40.743 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname
00:12:40.743 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:40.743 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 174016
00:12:40.743 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:40.743 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:40.743 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 174016'
00:12:40.743 killing process with pid 174016
00:12:40.743 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 --
# kill 174016 00:12:40.743 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 174016 00:12:41.002 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:41.002 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:41.002 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:41.002 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:41.002 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:41.002 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:41.002 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:41.002 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:41.002 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:41.002 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.002 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:41.002 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.554 20:13:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:43.554 00:12:43.554 real 0m7.173s 00:12:43.554 user 0m11.483s 00:12:43.554 sys 0m2.344s 00:12:43.554 20:13:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.554 20:13:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.554 ************************************ 
00:12:43.554 END TEST nvmf_referrals 00:12:43.554 ************************************ 00:12:43.554 20:13:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:43.554 20:13:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:43.554 20:13:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.554 20:13:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:43.554 ************************************ 00:12:43.554 START TEST nvmf_connect_disconnect 00:12:43.554 ************************************ 00:12:43.554 20:13:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:43.554 * Looking for test storage... 
00:12:43.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:43.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.554 --rc genhtml_branch_coverage=1 00:12:43.554 --rc genhtml_function_coverage=1 00:12:43.554 --rc genhtml_legend=1 00:12:43.554 --rc geninfo_all_blocks=1 00:12:43.554 --rc geninfo_unexecuted_blocks=1 00:12:43.554 00:12:43.554 ' 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:43.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.554 --rc genhtml_branch_coverage=1 00:12:43.554 --rc genhtml_function_coverage=1 00:12:43.554 --rc genhtml_legend=1 00:12:43.554 --rc geninfo_all_blocks=1 00:12:43.554 --rc geninfo_unexecuted_blocks=1 00:12:43.554 00:12:43.554 ' 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:43.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.554 --rc genhtml_branch_coverage=1 00:12:43.554 --rc genhtml_function_coverage=1 00:12:43.554 --rc genhtml_legend=1 00:12:43.554 --rc geninfo_all_blocks=1 00:12:43.554 --rc geninfo_unexecuted_blocks=1 00:12:43.554 00:12:43.554 ' 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:43.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.554 --rc genhtml_branch_coverage=1 00:12:43.554 --rc genhtml_function_coverage=1 00:12:43.554 --rc genhtml_legend=1 00:12:43.554 --rc geninfo_all_blocks=1 00:12:43.554 --rc geninfo_unexecuted_blocks=1 00:12:43.554 00:12:43.554 ' 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:43.554 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:43.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:43.555 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:45.468 20:13:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:45.468 20:13:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:45.468 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:45.468 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:45.468 20:13:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:45.468 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:45.468 20:13:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:45.468 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:45.468 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:45.469 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:45.469 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:45.469 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:45.469 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:45.469 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:45.469 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:45.469 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:45.469 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:45.469 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:45.469 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:45.469 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:45.469 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:45.469 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:45.469 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:45.469 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:45.469 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:45.729 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:45.729 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:45.729 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:45.729 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:45.729 20:13:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:45.729 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:45.729 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:45.729 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:45.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:45.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:12:45.729 00:12:45.729 --- 10.0.0.2 ping statistics --- 00:12:45.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.729 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:12:45.729 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:45.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:45.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:12:45.729 00:12:45.729 --- 10.0.0.1 ping statistics --- 00:12:45.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.729 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:12:45.729 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:45.729 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:45.729 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:45.729 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:45.729 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:45.729 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:45.729 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:45.729 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:45.729 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:45.729 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:45.729 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:45.729 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:45.729 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:45.729 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=176439 00:12:45.729 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:45.729 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 176439 00:12:45.729 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 176439 ']' 00:12:45.729 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.729 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:45.729 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.729 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:45.729 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:45.729 [2024-11-18 20:13:57.620838] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:12:45.729 [2024-11-18 20:13:57.620916] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.729 [2024-11-18 20:13:57.691410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:45.729 [2024-11-18 20:13:57.735466] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:45.729 [2024-11-18 20:13:57.735517] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:45.729 [2024-11-18 20:13:57.735546] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:45.729 [2024-11-18 20:13:57.735558] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:45.729 [2024-11-18 20:13:57.735568] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:45.990 [2024-11-18 20:13:57.737182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.990 [2024-11-18 20:13:57.737243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:45.990 [2024-11-18 20:13:57.737307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:45.990 [2024-11-18 20:13:57.737310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.990 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:45.990 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:45.990 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:45.990 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:45.990 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:45.990 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:45.990 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:45.990 20:13:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.990 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:45.990 [2024-11-18 20:13:57.885359] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:45.990 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.990 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:45.990 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.990 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:45.990 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.990 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:45.990 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:45.990 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.990 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:45.990 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.990 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:45.990 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.990 20:13:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:45.990 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.990 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.990 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.990 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:45.990 [2024-11-18 20:13:57.956118] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.990 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.990 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:45.990 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:45.990 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:45.990 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:48.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.095 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.956 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.522 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.539 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.439 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.443 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.995 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.907 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.892 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.688 [2024-11-18 20:17:14.541389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2115e30 is same with the state(6) to be set 00:16:02.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.772 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:16.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:26.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:35.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.664 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:37.664 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:37.664 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:37.664 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:37.664 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:37.664 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:37.664 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:37.664 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:37.664 rmmod nvme_tcp 00:16:37.664 
rmmod nvme_fabrics 00:16:37.664 rmmod nvme_keyring 00:16:37.664 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:37.664 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:37.664 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:37.664 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 176439 ']' 00:16:37.664 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 176439 00:16:37.664 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 176439 ']' 00:16:37.664 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 176439 00:16:37.664 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:16:37.664 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:37.664 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 176439 00:16:37.664 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:37.664 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:37.664 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 176439' 00:16:37.664 killing process with pid 176439 00:16:37.664 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 176439 00:16:37.664 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 176439 
00:16:37.923 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:37.924 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:37.924 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:37.924 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:37.924 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:37.924 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:37.924 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:37.924 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:37.924 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:37.924 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.924 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:37.924 20:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.469 20:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:40.469 00:16:40.469 real 3m56.938s 00:16:40.469 user 15m0.671s 00:16:40.469 sys 0m36.830s 00:16:40.469 20:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:40.469 20:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:40.469 
************************************ 00:16:40.469 END TEST nvmf_connect_disconnect 00:16:40.469 ************************************ 00:16:40.469 20:17:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:40.469 20:17:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:40.469 20:17:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:40.469 20:17:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:40.469 ************************************ 00:16:40.469 START TEST nvmf_multitarget 00:16:40.469 ************************************ 00:16:40.469 20:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:40.469 * Looking for test storage... 
00:16:40.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:40.469 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.469 --rc genhtml_branch_coverage=1 00:16:40.469 --rc genhtml_function_coverage=1 00:16:40.469 --rc genhtml_legend=1 00:16:40.469 --rc geninfo_all_blocks=1 00:16:40.469 --rc geninfo_unexecuted_blocks=1 00:16:40.469 00:16:40.469 ' 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:40.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.469 --rc genhtml_branch_coverage=1 00:16:40.469 --rc genhtml_function_coverage=1 00:16:40.469 --rc genhtml_legend=1 00:16:40.469 --rc geninfo_all_blocks=1 00:16:40.469 --rc geninfo_unexecuted_blocks=1 00:16:40.469 00:16:40.469 ' 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:40.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.469 --rc genhtml_branch_coverage=1 00:16:40.469 --rc genhtml_function_coverage=1 00:16:40.469 --rc genhtml_legend=1 00:16:40.469 --rc geninfo_all_blocks=1 00:16:40.469 --rc geninfo_unexecuted_blocks=1 00:16:40.469 00:16:40.469 ' 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:40.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.469 --rc genhtml_branch_coverage=1 00:16:40.469 --rc genhtml_function_coverage=1 00:16:40.469 --rc genhtml_legend=1 00:16:40.469 --rc geninfo_all_blocks=1 00:16:40.469 --rc geninfo_unexecuted_blocks=1 00:16:40.469 00:16:40.469 ' 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:40.469 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:40.470 20:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:40.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.470 20:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:40.470 20:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:42.379 20:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:42.379 20:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:42.379 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:42.379 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:42.379 20:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:42.379 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:42.379 
20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:42.379 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:42.380 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:42.380 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:42.380 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:42.380 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:42.380 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:42.380 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:42.380 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:42.380 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:42.380 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:42.380 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:42.380 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:42.380 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:42.380 20:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:42.380 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:42.380 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:42.380 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:42.380 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:42.380 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:42.380 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:42.380 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:42.380 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:42.380 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:42.380 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:42.380 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:42.380 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:42.380 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:42.380 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:42.639 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:16:42.639 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:42.639 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:42.639 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:42.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:42.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:16:42.639 00:16:42.639 --- 10.0.0.2 ping statistics --- 00:16:42.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.639 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:16:42.639 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:42.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:42.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:16:42.639 00:16:42.639 --- 10.0.0.1 ping statistics --- 00:16:42.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.639 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:16:42.639 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:42.639 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:16:42.639 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:42.639 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:42.639 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:42.639 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:42.639 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:42.639 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:42.639 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:42.639 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:42.639 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:42.639 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:42.639 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:42.639 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=207553 00:16:42.639 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:42.639 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 207553 00:16:42.639 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 207553 ']' 00:16:42.639 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.639 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:42.639 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.639 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:42.639 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:42.639 [2024-11-18 20:17:54.591095] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:16:42.639 [2024-11-18 20:17:54.591169] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:42.898 [2024-11-18 20:17:54.664890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:42.898 [2024-11-18 20:17:54.711047] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:42.898 [2024-11-18 20:17:54.711097] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:42.898 [2024-11-18 20:17:54.711120] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:42.898 [2024-11-18 20:17:54.711131] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:42.898 [2024-11-18 20:17:54.711140] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:42.898 [2024-11-18 20:17:54.712560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.898 [2024-11-18 20:17:54.712625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:42.898 [2024-11-18 20:17:54.712692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:42.898 [2024-11-18 20:17:54.712696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.898 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:42.898 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:42.898 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:42.899 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:42.899 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:42.899 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:42.899 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:42.899 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:42.899 20:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:43.158 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:43.158 20:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:43.158 "nvmf_tgt_1" 00:16:43.158 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:43.416 "nvmf_tgt_2" 00:16:43.416 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:43.416 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:43.416 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:43.416 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:43.675 true 00:16:43.675 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:43.675 true 00:16:43.675 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:43.675 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:43.675 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:43.675 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:43.675 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:43.675 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:43.675 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:43.675 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:43.675 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:43.675 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:43.675 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:43.936 rmmod nvme_tcp 00:16:43.936 rmmod nvme_fabrics 00:16:43.936 rmmod nvme_keyring 00:16:43.936 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:43.936 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:43.936 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:43.936 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 207553 ']' 00:16:43.936 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 207553 00:16:43.936 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 207553 ']' 00:16:43.936 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 207553 00:16:43.936 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:16:43.936 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:43.936 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 207553 00:16:43.936 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:43.936 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:43.936 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 207553' 00:16:43.936 killing process with pid 207553 00:16:43.936 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 207553 00:16:43.936 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 207553 00:16:44.198 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:44.198 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:44.198 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:44.198 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:44.198 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:16:44.198 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:44.198 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:16:44.198 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:44.198 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:44.198 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.198 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:44.198 20:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.108 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:46.108 00:16:46.108 real 0m6.030s 00:16:46.108 user 0m6.653s 00:16:46.108 sys 0m2.075s 00:16:46.108 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:46.108 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:46.108 ************************************ 00:16:46.108 END TEST nvmf_multitarget 00:16:46.108 ************************************ 00:16:46.108 20:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:46.108 20:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:46.109 20:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:46.109 20:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:46.109 ************************************ 00:16:46.109 START TEST nvmf_rpc 00:16:46.109 ************************************ 00:16:46.109 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:46.368 * Looking for test storage... 
00:16:46.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:46.368 20:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:46.368 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:46.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.369 --rc genhtml_branch_coverage=1 00:16:46.369 --rc genhtml_function_coverage=1 00:16:46.369 --rc genhtml_legend=1 00:16:46.369 --rc geninfo_all_blocks=1 00:16:46.369 --rc geninfo_unexecuted_blocks=1 
00:16:46.369 00:16:46.369 ' 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:46.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.369 --rc genhtml_branch_coverage=1 00:16:46.369 --rc genhtml_function_coverage=1 00:16:46.369 --rc genhtml_legend=1 00:16:46.369 --rc geninfo_all_blocks=1 00:16:46.369 --rc geninfo_unexecuted_blocks=1 00:16:46.369 00:16:46.369 ' 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:46.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.369 --rc genhtml_branch_coverage=1 00:16:46.369 --rc genhtml_function_coverage=1 00:16:46.369 --rc genhtml_legend=1 00:16:46.369 --rc geninfo_all_blocks=1 00:16:46.369 --rc geninfo_unexecuted_blocks=1 00:16:46.369 00:16:46.369 ' 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:46.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.369 --rc genhtml_branch_coverage=1 00:16:46.369 --rc genhtml_function_coverage=1 00:16:46.369 --rc genhtml_legend=1 00:16:46.369 --rc geninfo_all_blocks=1 00:16:46.369 --rc geninfo_unexecuted_blocks=1 00:16:46.369 00:16:46.369 ' 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:46.369 20:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:46.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:46.369 20:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:46.369 20:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:48.905 
20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:48.905 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 
(0x8086 - 0x159b)' 00:16:48.906 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:48.906 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:48.906 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:48.906 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.906 20:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:48.906 
20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:48.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:48.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:16:48.906 00:16:48.906 --- 10.0.0.2 ping statistics --- 00:16:48.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.906 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:48.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:48.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:16:48.906 00:16:48.906 --- 10.0.0.1 ping statistics --- 00:16:48.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.906 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=209701 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:48.906 
20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 209701 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 209701 ']' 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.906 [2024-11-18 20:18:00.618960] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:16:48.906 [2024-11-18 20:18:00.619055] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.906 [2024-11-18 20:18:00.695859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:48.906 [2024-11-18 20:18:00.744570] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:48.906 [2024-11-18 20:18:00.744623] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:48.906 [2024-11-18 20:18:00.744643] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:48.906 [2024-11-18 20:18:00.744671] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:16:48.906 [2024-11-18 20:18:00.744681] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:48.906 [2024-11-18 20:18:00.746382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.906 [2024-11-18 20:18:00.746410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:48.906 [2024-11-18 20:18:00.746436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:48.906 [2024-11-18 20:18:00.746439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:48.906 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:48.907 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:48.907 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:48.907 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.907 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:48.907 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:48.907 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.907 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.907 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.907 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:48.907 "tick_rate": 2700000000, 00:16:48.907 "poll_groups": [ 00:16:48.907 { 00:16:48.907 "name": "nvmf_tgt_poll_group_000", 00:16:48.907 "admin_qpairs": 0, 00:16:48.907 "io_qpairs": 0, 00:16:48.907 
"current_admin_qpairs": 0, 00:16:48.907 "current_io_qpairs": 0, 00:16:48.907 "pending_bdev_io": 0, 00:16:48.907 "completed_nvme_io": 0, 00:16:48.907 "transports": [] 00:16:48.907 }, 00:16:48.907 { 00:16:48.907 "name": "nvmf_tgt_poll_group_001", 00:16:48.907 "admin_qpairs": 0, 00:16:48.907 "io_qpairs": 0, 00:16:48.907 "current_admin_qpairs": 0, 00:16:48.907 "current_io_qpairs": 0, 00:16:48.907 "pending_bdev_io": 0, 00:16:48.907 "completed_nvme_io": 0, 00:16:48.907 "transports": [] 00:16:48.907 }, 00:16:48.907 { 00:16:48.907 "name": "nvmf_tgt_poll_group_002", 00:16:48.907 "admin_qpairs": 0, 00:16:48.907 "io_qpairs": 0, 00:16:48.907 "current_admin_qpairs": 0, 00:16:48.907 "current_io_qpairs": 0, 00:16:48.907 "pending_bdev_io": 0, 00:16:48.907 "completed_nvme_io": 0, 00:16:48.907 "transports": [] 00:16:48.907 }, 00:16:48.907 { 00:16:48.907 "name": "nvmf_tgt_poll_group_003", 00:16:48.907 "admin_qpairs": 0, 00:16:48.907 "io_qpairs": 0, 00:16:48.907 "current_admin_qpairs": 0, 00:16:48.907 "current_io_qpairs": 0, 00:16:48.907 "pending_bdev_io": 0, 00:16:48.907 "completed_nvme_io": 0, 00:16:48.907 "transports": [] 00:16:48.907 } 00:16:48.907 ] 00:16:48.907 }' 00:16:48.907 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:48.907 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:48.907 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:48.907 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:49.166 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:49.166 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:49.166 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:49.166 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:49.166 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.166 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.166 [2024-11-18 20:18:00.982487] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:49.166 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.166 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:49.166 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.166 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.166 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.166 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:49.166 "tick_rate": 2700000000, 00:16:49.166 "poll_groups": [ 00:16:49.166 { 00:16:49.166 "name": "nvmf_tgt_poll_group_000", 00:16:49.166 "admin_qpairs": 0, 00:16:49.166 "io_qpairs": 0, 00:16:49.166 "current_admin_qpairs": 0, 00:16:49.166 "current_io_qpairs": 0, 00:16:49.166 "pending_bdev_io": 0, 00:16:49.166 "completed_nvme_io": 0, 00:16:49.166 "transports": [ 00:16:49.166 { 00:16:49.166 "trtype": "TCP" 00:16:49.166 } 00:16:49.166 ] 00:16:49.166 }, 00:16:49.166 { 00:16:49.166 "name": "nvmf_tgt_poll_group_001", 00:16:49.166 "admin_qpairs": 0, 00:16:49.166 "io_qpairs": 0, 00:16:49.166 "current_admin_qpairs": 0, 00:16:49.166 "current_io_qpairs": 0, 00:16:49.166 "pending_bdev_io": 0, 00:16:49.166 "completed_nvme_io": 0, 00:16:49.166 "transports": [ 00:16:49.166 { 00:16:49.166 "trtype": "TCP" 00:16:49.166 } 00:16:49.166 ] 00:16:49.166 }, 00:16:49.166 { 00:16:49.166 "name": "nvmf_tgt_poll_group_002", 00:16:49.166 "admin_qpairs": 0, 00:16:49.166 "io_qpairs": 0, 00:16:49.166 
"current_admin_qpairs": 0, 00:16:49.166 "current_io_qpairs": 0, 00:16:49.166 "pending_bdev_io": 0, 00:16:49.166 "completed_nvme_io": 0, 00:16:49.166 "transports": [ 00:16:49.166 { 00:16:49.166 "trtype": "TCP" 00:16:49.166 } 00:16:49.166 ] 00:16:49.166 }, 00:16:49.166 { 00:16:49.166 "name": "nvmf_tgt_poll_group_003", 00:16:49.166 "admin_qpairs": 0, 00:16:49.166 "io_qpairs": 0, 00:16:49.166 "current_admin_qpairs": 0, 00:16:49.166 "current_io_qpairs": 0, 00:16:49.166 "pending_bdev_io": 0, 00:16:49.166 "completed_nvme_io": 0, 00:16:49.166 "transports": [ 00:16:49.166 { 00:16:49.166 "trtype": "TCP" 00:16:49.166 } 00:16:49.166 ] 00:16:49.166 } 00:16:49.166 ] 00:16:49.166 }' 00:16:49.166 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:49.166 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:49.166 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:49.166 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:49.166 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:49.166 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:49.166 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:49.166 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:49.166 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:49.166 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:49.166 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:49.166 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:16:49.166 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:49.166 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:49.166 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.166 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.166 Malloc1 00:16:49.166 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.166 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:49.166 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.167 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.167 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.167 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:49.167 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.167 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.167 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.167 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:49.167 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.167 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.167 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.167 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:49.167 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.167 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.167 [2024-11-18 20:18:01.146508] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.167 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.167 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:49.167 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:49.167 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:49.167 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:49.167 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:49.167 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:49.167 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:49.167 
20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:49.167 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:49.167 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:49.167 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:49.167 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:49.167 [2024-11-18 20:18:01.169134] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:49.427 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:49.427 could not add new controller: failed to write to nvme-fabrics device 00:16:49.427 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:49.427 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:49.427 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:49.427 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:49.427 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:49.427 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.427 20:18:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.427 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.427 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:49.997 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:49.997 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:49.997 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:49.997 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:49.997 20:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:51.908 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:51.908 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:51.908 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:51.908 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:51.908 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:51.908 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:51.908 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:52.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:52.170 20:18:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:52.170 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:52.170 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:52.170 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:52.170 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:52.170 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:52.170 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:52.170 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:52.170 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.170 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.170 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.170 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:52.170 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:52.170 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:52.170 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:52.170 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:52.170 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:52.170 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:52.170 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:52.170 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:52.170 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:52.170 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:52.170 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:52.170 [2024-11-18 20:18:04.013347] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:52.170 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:52.170 could not add new controller: failed to write to nvme-fabrics device 00:16:52.170 20:18:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:52.170 20:18:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:52.170 20:18:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:52.170 20:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:52.170 20:18:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:52.170 20:18:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.170 20:18:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.170 20:18:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.170 20:18:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:52.741 20:18:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:52.741 20:18:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:52.741 20:18:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:52.741 20:18:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:52.741 20:18:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:55.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.280 [2024-11-18 20:18:06.887285] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.280 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:55.539 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:55.539 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:55.539 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:55.539 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:55.539 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:58.073 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:58.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.074 20:18:09 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.074 [2024-11-18 20:18:09.606057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.074 20:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:58.335 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:58.335 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:58.335 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:58.335 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:58.335 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:00.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.875 [2024-11-18 20:18:12.413921] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.875 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:01.137 20:18:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:01.137 20:18:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:01.137 20:18:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:17:01.137 20:18:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:01.137 20:18:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:03.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.684 [2024-11-18 20:18:15.235483] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.684 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:03.945 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:03.945 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:03.945 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:03.945 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:03.945 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:17:05.856 20:18:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:05.857 20:18:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:05.857 20:18:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:06.116 20:18:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:06.116 20:18:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:06.116 20:18:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:06.116 20:18:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:06.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:06.116 20:18:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:06.116 20:18:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:06.116 20:18:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:06.116 20:18:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:06.116 20:18:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:06.116 20:18:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:06.116 20:18:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:06.116 20:18:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:06.116 20:18:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.116 20:18:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.116 20:18:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.116 20:18:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:06.116 20:18:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.116 20:18:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.116 20:18:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.116 20:18:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:06.116 20:18:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:06.116 20:18:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.116 20:18:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.116 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.116 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:06.116 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.116 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.116 [2024-11-18 20:18:18.011208] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:06.116 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.116 20:18:18 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:06.116 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.116 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.116 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.116 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:06.116 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.116 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.116 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.116 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:06.683 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:06.683 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:06.683 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:06.683 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:06.683 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:09.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.224 [2024-11-18 20:18:20.786042] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.224 [2024-11-18 20:18:20.834105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:09.224 
20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.224 [2024-11-18 20:18:20.882248] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:09.224 
20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:09.224 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.225 [2024-11-18 20:18:20.930411] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.225 [2024-11-18 
20:18:20.978581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.225 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.225 
20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:09.225 "tick_rate": 2700000000, 00:17:09.225 "poll_groups": [ 00:17:09.225 { 00:17:09.225 "name": "nvmf_tgt_poll_group_000", 00:17:09.225 "admin_qpairs": 2, 00:17:09.225 "io_qpairs": 84, 00:17:09.225 "current_admin_qpairs": 0, 00:17:09.225 "current_io_qpairs": 0, 00:17:09.225 "pending_bdev_io": 0, 00:17:09.225 "completed_nvme_io": 198, 00:17:09.225 "transports": [ 00:17:09.225 { 00:17:09.225 "trtype": "TCP" 00:17:09.225 } 00:17:09.225 ] 00:17:09.225 }, 00:17:09.225 { 00:17:09.225 "name": "nvmf_tgt_poll_group_001", 00:17:09.225 "admin_qpairs": 2, 00:17:09.225 "io_qpairs": 84, 00:17:09.225 "current_admin_qpairs": 0, 00:17:09.225 "current_io_qpairs": 0, 00:17:09.225 "pending_bdev_io": 0, 00:17:09.225 "completed_nvme_io": 135, 00:17:09.225 "transports": [ 00:17:09.225 { 00:17:09.225 "trtype": "TCP" 00:17:09.225 } 00:17:09.225 ] 00:17:09.225 }, 00:17:09.225 { 00:17:09.225 "name": "nvmf_tgt_poll_group_002", 00:17:09.225 "admin_qpairs": 1, 00:17:09.225 "io_qpairs": 84, 00:17:09.225 "current_admin_qpairs": 0, 00:17:09.225 "current_io_qpairs": 0, 00:17:09.225 "pending_bdev_io": 0, 00:17:09.225 "completed_nvme_io": 135, 00:17:09.225 "transports": [ 00:17:09.225 { 00:17:09.225 "trtype": "TCP" 00:17:09.225 } 00:17:09.225 ] 00:17:09.225 }, 00:17:09.225 { 00:17:09.225 "name": "nvmf_tgt_poll_group_003", 00:17:09.225 "admin_qpairs": 2, 00:17:09.225 "io_qpairs": 84, 
00:17:09.225 "current_admin_qpairs": 0, 00:17:09.225 "current_io_qpairs": 0, 00:17:09.225 "pending_bdev_io": 0, 00:17:09.225 "completed_nvme_io": 218, 00:17:09.225 "transports": [ 00:17:09.225 { 00:17:09.225 "trtype": "TCP" 00:17:09.225 } 00:17:09.225 ] 00:17:09.225 } 00:17:09.225 ] 00:17:09.225 }' 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:09.225 rmmod nvme_tcp 00:17:09.225 rmmod nvme_fabrics 00:17:09.225 rmmod nvme_keyring 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 209701 ']' 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 209701 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 209701 ']' 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 209701 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 209701 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 209701' 00:17:09.225 killing process with pid 209701 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@973 -- # kill 209701 00:17:09.225 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 209701 00:17:09.484 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:09.484 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:09.484 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:09.484 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:09.484 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:17:09.484 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:17:09.484 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:09.484 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:09.484 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:09.484 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.484 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:09.484 20:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.026 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:12.026 00:17:12.026 real 0m25.413s 00:17:12.026 user 1m21.958s 00:17:12.026 sys 0m4.458s 00:17:12.026 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.027 ************************************ 00:17:12.027 END TEST nvmf_rpc 00:17:12.027 
************************************ 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:12.027 ************************************ 00:17:12.027 START TEST nvmf_invalid 00:17:12.027 ************************************ 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:12.027 * Looking for test storage... 00:17:12.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
scripts/common.sh@336 -- # read -ra ver1 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:12.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.027 --rc genhtml_branch_coverage=1 00:17:12.027 --rc genhtml_function_coverage=1 00:17:12.027 --rc genhtml_legend=1 00:17:12.027 --rc geninfo_all_blocks=1 00:17:12.027 --rc geninfo_unexecuted_blocks=1 00:17:12.027 00:17:12.027 ' 
00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:12.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.027 --rc genhtml_branch_coverage=1 00:17:12.027 --rc genhtml_function_coverage=1 00:17:12.027 --rc genhtml_legend=1 00:17:12.027 --rc geninfo_all_blocks=1 00:17:12.027 --rc geninfo_unexecuted_blocks=1 00:17:12.027 00:17:12.027 ' 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:12.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.027 --rc genhtml_branch_coverage=1 00:17:12.027 --rc genhtml_function_coverage=1 00:17:12.027 --rc genhtml_legend=1 00:17:12.027 --rc geninfo_all_blocks=1 00:17:12.027 --rc geninfo_unexecuted_blocks=1 00:17:12.027 00:17:12.027 ' 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:12.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.027 --rc genhtml_branch_coverage=1 00:17:12.027 --rc genhtml_function_coverage=1 00:17:12.027 --rc genhtml_legend=1 00:17:12.027 --rc geninfo_all_blocks=1 00:17:12.027 --rc geninfo_unexecuted_blocks=1 00:17:12.027 00:17:12.027 ' 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:12.027 20:18:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:12.027 
20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.027 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:12.028 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:12.028 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:12.028 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:12.028 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:12.028 20:18:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:12.028 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:12.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:12.028 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:12.028 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:12.028 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:12.028 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:12.028 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:12.028 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:12.028 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:12.028 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:12.028 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:12.028 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:12.028 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:12.028 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:12.028 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:12.028 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:12.028 20:18:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.028 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:12.028 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.028 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:12.028 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:12.028 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:12.028 20:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:13.938 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:13.938 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:13.938 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:13.938 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:13.939 20:18:25 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:13.939 20:18:25 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:13.939 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:13.939 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:13.939 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:13.939 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:13.939 20:18:25 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:13.939 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:14.200 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:14.200 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:14.200 20:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:14.200 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:14.200 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:14.200 20:18:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:14.200 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:14.200 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:14.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:14.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:17:14.200 00:17:14.200 --- 10.0.0.2 ping statistics --- 00:17:14.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.200 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:17:14.200 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:14.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:14.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:17:14.200 00:17:14.200 --- 10.0.0.1 ping statistics --- 00:17:14.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.200 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:17:14.200 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:14.200 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:17:14.200 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:14.200 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:14.200 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:14.200 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:14.200 20:18:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:14.200 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:14.200 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:14.200 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:14.200 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:14.200 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:14.200 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:14.200 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=214814 00:17:14.200 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:14.200 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 214814 00:17:14.200 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 214814 ']' 00:17:14.200 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.200 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:14.200 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:14.200 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:14.200 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:14.200 [2024-11-18 20:18:26.135292] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:17:14.200 [2024-11-18 20:18:26.135365] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.200 [2024-11-18 20:18:26.206323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:14.459 [2024-11-18 20:18:26.251440] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.459 [2024-11-18 20:18:26.251491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.459 [2024-11-18 20:18:26.251514] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.459 [2024-11-18 20:18:26.251531] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.459 [2024-11-18 20:18:26.251541] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:14.459 [2024-11-18 20:18:26.253153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.459 [2024-11-18 20:18:26.253219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.459 [2024-11-18 20:18:26.253332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:14.459 [2024-11-18 20:18:26.253336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.459 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:14.459 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:17:14.459 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:14.459 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:14.459 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:14.459 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:14.459 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:14.459 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode27243 00:17:14.718 [2024-11-18 20:18:26.636275] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:14.718 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:14.718 { 00:17:14.718 "nqn": "nqn.2016-06.io.spdk:cnode27243", 00:17:14.718 "tgt_name": "foobar", 00:17:14.718 "method": "nvmf_create_subsystem", 00:17:14.718 "req_id": 1 00:17:14.718 } 00:17:14.718 Got JSON-RPC error 
response 00:17:14.718 response: 00:17:14.718 { 00:17:14.718 "code": -32603, 00:17:14.718 "message": "Unable to find target foobar" 00:17:14.718 }' 00:17:14.718 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:14.718 { 00:17:14.718 "nqn": "nqn.2016-06.io.spdk:cnode27243", 00:17:14.718 "tgt_name": "foobar", 00:17:14.718 "method": "nvmf_create_subsystem", 00:17:14.718 "req_id": 1 00:17:14.718 } 00:17:14.718 Got JSON-RPC error response 00:17:14.718 response: 00:17:14.718 { 00:17:14.718 "code": -32603, 00:17:14.718 "message": "Unable to find target foobar" 00:17:14.718 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:14.718 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:14.718 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode1917 00:17:14.977 [2024-11-18 20:18:26.905211] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1917: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:14.977 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:14.977 { 00:17:14.977 "nqn": "nqn.2016-06.io.spdk:cnode1917", 00:17:14.977 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:14.977 "method": "nvmf_create_subsystem", 00:17:14.977 "req_id": 1 00:17:14.977 } 00:17:14.977 Got JSON-RPC error response 00:17:14.977 response: 00:17:14.977 { 00:17:14.978 "code": -32602, 00:17:14.978 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:14.978 }' 00:17:14.978 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:14.978 { 00:17:14.978 "nqn": "nqn.2016-06.io.spdk:cnode1917", 00:17:14.978 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:14.978 "method": "nvmf_create_subsystem", 00:17:14.978 
"req_id": 1 00:17:14.978 } 00:17:14.978 Got JSON-RPC error response 00:17:14.978 response: 00:17:14.978 { 00:17:14.978 "code": -32602, 00:17:14.978 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:14.978 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:14.978 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:14.978 20:18:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode13923 00:17:15.237 [2024-11-18 20:18:27.178142] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13923: invalid model number 'SPDK_Controller' 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:15.237 { 00:17:15.237 "nqn": "nqn.2016-06.io.spdk:cnode13923", 00:17:15.237 "model_number": "SPDK_Controller\u001f", 00:17:15.237 "method": "nvmf_create_subsystem", 00:17:15.237 "req_id": 1 00:17:15.237 } 00:17:15.237 Got JSON-RPC error response 00:17:15.237 response: 00:17:15.237 { 00:17:15.237 "code": -32602, 00:17:15.237 "message": "Invalid MN SPDK_Controller\u001f" 00:17:15.237 }' 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:15.237 { 00:17:15.237 "nqn": "nqn.2016-06.io.spdk:cnode13923", 00:17:15.237 "model_number": "SPDK_Controller\u001f", 00:17:15.237 "method": "nvmf_create_subsystem", 00:17:15.237 "req_id": 1 00:17:15.237 } 00:17:15.237 Got JSON-RPC error response 00:17:15.237 response: 00:17:15.237 { 00:17:15.237 "code": -32602, 00:17:15.237 "message": "Invalid MN SPDK_Controller\u001f" 00:17:15.237 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.237 20:18:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:17:15.237 20:18:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:17:15.237 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:17:15.238 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.238 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.238 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:17:15.238 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:17:15.238 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:17:15.238 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.238 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.238 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:15.238 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:15.238 20:18:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:15.238 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.238 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.238 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:17:15.238 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:17:15.238 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:17:15.238 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.238 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:15.497 20:18:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.497 20:18:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.497 20:18:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ $ == \- ]] 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '$cuL1B;c^hClM{8LJ%^*,' 00:17:15.497 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '$cuL1B;c^hClM{8LJ%^*,' nqn.2016-06.io.spdk:cnode7198 00:17:15.757 [2024-11-18 20:18:27.519295] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7198: invalid serial number '$cuL1B;c^hClM{8LJ%^*,' 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:15.757 { 00:17:15.757 "nqn": "nqn.2016-06.io.spdk:cnode7198", 00:17:15.757 "serial_number": "$cuL1B;c^hClM{8LJ%^*,", 00:17:15.757 "method": "nvmf_create_subsystem", 00:17:15.757 "req_id": 1 00:17:15.757 } 00:17:15.757 Got JSON-RPC error response 00:17:15.757 response: 00:17:15.757 { 00:17:15.757 "code": -32602, 00:17:15.757 "message": "Invalid SN $cuL1B;c^hClM{8LJ%^*," 00:17:15.757 }' 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:15.757 { 00:17:15.757 "nqn": "nqn.2016-06.io.spdk:cnode7198", 00:17:15.757 "serial_number": "$cuL1B;c^hClM{8LJ%^*,", 00:17:15.757 "method": "nvmf_create_subsystem", 00:17:15.757 "req_id": 1 00:17:15.757 } 00:17:15.757 Got JSON-RPC error response 00:17:15.757 response: 00:17:15.757 { 00:17:15.757 "code": -32602, 00:17:15.757 "message": "Invalid SN $cuL1B;c^hClM{8LJ%^*," 00:17:15.757 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:15.757 20:18:27 
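The character-by-character loop traced above (pick a code point with `printf %x`, append it with `echo -e '\xNN'`) can be condensed into a short re-implementation. This is a sketch, not the verbatim `gen_random_s` helper from target/invalid.sh: it draws codes uniformly from the same 32..127 range shown in the `chars` array and appends each one as a byte.

```shell
# Sketch of gen_random_s: build a string of $1 random characters, using the
# same 32..127 code-point range and the same printf %x + \xNN trick as the
# trace above. (Condensed re-implementation, not the exact SPDK helper.)
gen_random_s() {
  local length=$1 ll code string=
  for (( ll = 0; ll < length; ll++ )); do
    code=$(( RANDOM % 96 + 32 ))                  # 32..127, as in the chars array
    string+=$(printf "\\x$(printf %x "$code")")   # append the character for that code
  done
  printf '%s\n' "$string"                         # %s avoids echo option parsing
}
```

The test then feeds the result to `nvmf_create_subsystem -s` exactly as above, expecting the `Invalid SN` JSON-RPC error, since control characters and other disallowed bytes make the serial number invalid.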
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.757 20:18:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:17:15.757 20:18:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:15.757 20:18:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.757 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:17:15.758 20:18:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.758 20:18:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.758 20:18:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:17:15.758 20:18:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:15.758 20:18:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:15.758 20:18:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.758 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.759 20:18:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.759 20:18:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ A == \- ]] 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Amq4VW(yq[RYHt}<{@pYpt54M}TDDxmLF|jQw.6LG' 00:17:15.759 20:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Amq4VW(yq[RYHt}<{@pYpt54M}TDDxmLF|jQw.6LG' nqn.2016-06.io.spdk:cnode9832 00:17:16.018 [2024-11-18 20:18:27.996871] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9832: invalid model number 'Amq4VW(yq[RYHt}<{@pYpt54M}TDDxmLF|jQw.6LG' 00:17:16.018 20:18:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:16.018 { 00:17:16.018 "nqn": "nqn.2016-06.io.spdk:cnode9832", 00:17:16.018 "model_number": "Amq4VW(yq[RYHt}<{@pYpt54M}TDDxmLF|jQw.6LG", 00:17:16.018 "method": "nvmf_create_subsystem", 00:17:16.018 "req_id": 1 00:17:16.018 } 00:17:16.018 Got JSON-RPC error response 00:17:16.018 response: 00:17:16.018 { 00:17:16.018 "code": -32602, 00:17:16.018 "message": "Invalid MN Amq4VW(yq[RYHt}<{@pYpt54M}TDDxmLF|jQw.6LG" 00:17:16.018 }' 00:17:16.018 20:18:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:16.018 { 00:17:16.018 "nqn": 
"nqn.2016-06.io.spdk:cnode9832", 00:17:16.018 "model_number": "Amq4VW(yq[RYHt}<{@pYpt54M}TDDxmLF|jQw.6LG", 00:17:16.018 "method": "nvmf_create_subsystem", 00:17:16.018 "req_id": 1 00:17:16.018 } 00:17:16.018 Got JSON-RPC error response 00:17:16.018 response: 00:17:16.018 { 00:17:16.018 "code": -32602, 00:17:16.018 "message": "Invalid MN Amq4VW(yq[RYHt}<{@pYpt54M}TDDxmLF|jQw.6LG" 00:17:16.018 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:16.018 20:18:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:16.277 [2024-11-18 20:18:28.261797] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:16.537 20:18:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:16.796 20:18:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:16.796 20:18:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:16.796 20:18:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:16.796 20:18:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:16.796 20:18:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:17.055 [2024-11-18 20:18:28.907880] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:17.055 20:18:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:17.055 { 00:17:17.055 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:17.055 "listen_address": { 00:17:17.055 "trtype": "tcp", 00:17:17.055 "traddr": "", 00:17:17.055 "trsvcid": "4421" 
00:17:17.055 }, 00:17:17.055 "method": "nvmf_subsystem_remove_listener", 00:17:17.055 "req_id": 1 00:17:17.055 } 00:17:17.055 Got JSON-RPC error response 00:17:17.055 response: 00:17:17.055 { 00:17:17.055 "code": -32602, 00:17:17.055 "message": "Invalid parameters" 00:17:17.055 }' 00:17:17.055 20:18:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:17.055 { 00:17:17.055 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:17.055 "listen_address": { 00:17:17.055 "trtype": "tcp", 00:17:17.055 "traddr": "", 00:17:17.055 "trsvcid": "4421" 00:17:17.055 }, 00:17:17.055 "method": "nvmf_subsystem_remove_listener", 00:17:17.055 "req_id": 1 00:17:17.055 } 00:17:17.055 Got JSON-RPC error response 00:17:17.055 response: 00:17:17.055 { 00:17:17.055 "code": -32602, 00:17:17.055 "message": "Invalid parameters" 00:17:17.055 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:17.055 20:18:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13513 -i 0 00:17:17.314 [2024-11-18 20:18:29.188781] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13513: invalid cntlid range [0-65519] 00:17:17.314 20:18:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:17.314 { 00:17:17.314 "nqn": "nqn.2016-06.io.spdk:cnode13513", 00:17:17.314 "min_cntlid": 0, 00:17:17.314 "method": "nvmf_create_subsystem", 00:17:17.314 "req_id": 1 00:17:17.314 } 00:17:17.314 Got JSON-RPC error response 00:17:17.314 response: 00:17:17.314 { 00:17:17.314 "code": -32602, 00:17:17.314 "message": "Invalid cntlid range [0-65519]" 00:17:17.314 }' 00:17:17.314 20:18:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:17.314 { 00:17:17.314 "nqn": "nqn.2016-06.io.spdk:cnode13513", 00:17:17.314 "min_cntlid": 0, 00:17:17.314 "method": 
"nvmf_create_subsystem", 00:17:17.314 "req_id": 1 00:17:17.314 } 00:17:17.314 Got JSON-RPC error response 00:17:17.314 response: 00:17:17.314 { 00:17:17.314 "code": -32602, 00:17:17.314 "message": "Invalid cntlid range [0-65519]" 00:17:17.314 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:17.314 20:18:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26983 -i 65520 00:17:17.573 [2024-11-18 20:18:29.465757] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26983: invalid cntlid range [65520-65519] 00:17:17.573 20:18:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:17.573 { 00:17:17.573 "nqn": "nqn.2016-06.io.spdk:cnode26983", 00:17:17.573 "min_cntlid": 65520, 00:17:17.573 "method": "nvmf_create_subsystem", 00:17:17.573 "req_id": 1 00:17:17.573 } 00:17:17.573 Got JSON-RPC error response 00:17:17.573 response: 00:17:17.573 { 00:17:17.573 "code": -32602, 00:17:17.573 "message": "Invalid cntlid range [65520-65519]" 00:17:17.573 }' 00:17:17.573 20:18:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:17.573 { 00:17:17.573 "nqn": "nqn.2016-06.io.spdk:cnode26983", 00:17:17.573 "min_cntlid": 65520, 00:17:17.573 "method": "nvmf_create_subsystem", 00:17:17.573 "req_id": 1 00:17:17.573 } 00:17:17.573 Got JSON-RPC error response 00:17:17.573 response: 00:17:17.573 { 00:17:17.573 "code": -32602, 00:17:17.573 "message": "Invalid cntlid range [65520-65519]" 00:17:17.573 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:17.573 20:18:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6238 -I 0 00:17:17.831 [2024-11-18 20:18:29.734574] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: 
Subsystem nqn.2016-06.io.spdk:cnode6238: invalid cntlid range [1-0] 00:17:17.831 20:18:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:17.831 { 00:17:17.831 "nqn": "nqn.2016-06.io.spdk:cnode6238", 00:17:17.831 "max_cntlid": 0, 00:17:17.831 "method": "nvmf_create_subsystem", 00:17:17.831 "req_id": 1 00:17:17.831 } 00:17:17.831 Got JSON-RPC error response 00:17:17.831 response: 00:17:17.831 { 00:17:17.831 "code": -32602, 00:17:17.831 "message": "Invalid cntlid range [1-0]" 00:17:17.831 }' 00:17:17.831 20:18:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:17.831 { 00:17:17.831 "nqn": "nqn.2016-06.io.spdk:cnode6238", 00:17:17.831 "max_cntlid": 0, 00:17:17.831 "method": "nvmf_create_subsystem", 00:17:17.831 "req_id": 1 00:17:17.831 } 00:17:17.831 Got JSON-RPC error response 00:17:17.831 response: 00:17:17.831 { 00:17:17.831 "code": -32602, 00:17:17.831 "message": "Invalid cntlid range [1-0]" 00:17:17.831 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:17.831 20:18:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6257 -I 65520 00:17:18.089 [2024-11-18 20:18:29.995457] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6257: invalid cntlid range [1-65520] 00:17:18.089 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:18.089 { 00:17:18.089 "nqn": "nqn.2016-06.io.spdk:cnode6257", 00:17:18.089 "max_cntlid": 65520, 00:17:18.089 "method": "nvmf_create_subsystem", 00:17:18.089 "req_id": 1 00:17:18.089 } 00:17:18.089 Got JSON-RPC error response 00:17:18.089 response: 00:17:18.089 { 00:17:18.089 "code": -32602, 00:17:18.089 "message": "Invalid cntlid range [1-65520]" 00:17:18.089 }' 00:17:18.089 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # 
[[ request: 00:17:18.089 { 00:17:18.089 "nqn": "nqn.2016-06.io.spdk:cnode6257", 00:17:18.089 "max_cntlid": 65520, 00:17:18.089 "method": "nvmf_create_subsystem", 00:17:18.090 "req_id": 1 00:17:18.090 } 00:17:18.090 Got JSON-RPC error response 00:17:18.090 response: 00:17:18.090 { 00:17:18.090 "code": -32602, 00:17:18.090 "message": "Invalid cntlid range [1-65520]" 00:17:18.090 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:18.090 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21964 -i 6 -I 5 00:17:18.348 [2024-11-18 20:18:30.268413] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21964: invalid cntlid range [6-5] 00:17:18.348 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:18.348 { 00:17:18.348 "nqn": "nqn.2016-06.io.spdk:cnode21964", 00:17:18.348 "min_cntlid": 6, 00:17:18.348 "max_cntlid": 5, 00:17:18.348 "method": "nvmf_create_subsystem", 00:17:18.348 "req_id": 1 00:17:18.348 } 00:17:18.348 Got JSON-RPC error response 00:17:18.348 response: 00:17:18.348 { 00:17:18.348 "code": -32602, 00:17:18.348 "message": "Invalid cntlid range [6-5]" 00:17:18.348 }' 00:17:18.348 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:18.348 { 00:17:18.348 "nqn": "nqn.2016-06.io.spdk:cnode21964", 00:17:18.348 "min_cntlid": 6, 00:17:18.348 "max_cntlid": 5, 00:17:18.348 "method": "nvmf_create_subsystem", 00:17:18.348 "req_id": 1 00:17:18.348 } 00:17:18.348 Got JSON-RPC error response 00:17:18.348 response: 00:17:18.348 { 00:17:18.348 "code": -32602, 00:17:18.348 "message": "Invalid cntlid range [6-5]" 00:17:18.348 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:18.348 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:18.607 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:18.607 { 00:17:18.607 "name": "foobar", 00:17:18.607 "method": "nvmf_delete_target", 00:17:18.607 "req_id": 1 00:17:18.607 } 00:17:18.607 Got JSON-RPC error response 00:17:18.607 response: 00:17:18.607 { 00:17:18.607 "code": -32602, 00:17:18.607 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:18.607 }' 00:17:18.607 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:18.607 { 00:17:18.607 "name": "foobar", 00:17:18.607 "method": "nvmf_delete_target", 00:17:18.607 "req_id": 1 00:17:18.607 } 00:17:18.607 Got JSON-RPC error response 00:17:18.607 response: 00:17:18.607 { 00:17:18.607 "code": -32602, 00:17:18.607 "message": "The specified target doesn't exist, cannot delete it." 00:17:18.607 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:18.607 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:18.607 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:18.607 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:18.607 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:18.607 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:18.607 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:18.607 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:18.607 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:18.607 rmmod nvme_tcp 00:17:18.607 
rmmod nvme_fabrics 00:17:18.607 rmmod nvme_keyring 00:17:18.607 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:18.607 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:18.607 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:18.607 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 214814 ']' 00:17:18.607 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 214814 00:17:18.607 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 214814 ']' 00:17:18.607 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 214814 00:17:18.607 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:17:18.607 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:18.607 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 214814 00:17:18.607 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:18.607 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:18.607 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 214814' 00:17:18.607 killing process with pid 214814 00:17:18.607 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 214814 00:17:18.607 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 214814 00:17:18.879 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:18.879 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:18.879 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:18.879 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:18.879 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:17:18.879 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:18.879 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:17:18.879 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:18.879 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:18.879 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.879 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:18.879 20:18:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.873 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:20.873 00:17:20.873 real 0m9.196s 00:17:20.873 user 0m21.906s 00:17:20.873 sys 0m2.603s 00:17:20.873 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:20.873 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:20.873 ************************************ 00:17:20.873 END TEST nvmf_invalid 00:17:20.873 ************************************ 00:17:20.873 20:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:20.873 20:18:32 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:20.873 20:18:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:20.873 20:18:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:20.873 ************************************ 00:17:20.873 START TEST nvmf_connect_stress 00:17:20.873 ************************************ 00:17:20.873 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:20.873 * Looking for test storage... 00:17:20.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:20.873 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:20.873 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:17:20.873 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:21.151 20:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- 
# echo 2 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:21.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.151 --rc genhtml_branch_coverage=1 00:17:21.151 --rc genhtml_function_coverage=1 00:17:21.151 --rc genhtml_legend=1 00:17:21.151 --rc geninfo_all_blocks=1 00:17:21.151 --rc geninfo_unexecuted_blocks=1 00:17:21.151 00:17:21.151 ' 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:21.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.151 --rc genhtml_branch_coverage=1 00:17:21.151 --rc genhtml_function_coverage=1 00:17:21.151 --rc genhtml_legend=1 00:17:21.151 --rc geninfo_all_blocks=1 00:17:21.151 --rc geninfo_unexecuted_blocks=1 00:17:21.151 00:17:21.151 ' 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:21.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.151 --rc genhtml_branch_coverage=1 00:17:21.151 --rc genhtml_function_coverage=1 00:17:21.151 --rc genhtml_legend=1 00:17:21.151 --rc geninfo_all_blocks=1 00:17:21.151 --rc geninfo_unexecuted_blocks=1 00:17:21.151 00:17:21.151 ' 00:17:21.151 20:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:21.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.151 --rc genhtml_branch_coverage=1 00:17:21.151 --rc genhtml_function_coverage=1 00:17:21.151 --rc genhtml_legend=1 00:17:21.151 --rc geninfo_all_blocks=1 00:17:21.151 --rc geninfo_unexecuted_blocks=1 00:17:21.151 00:17:21.151 ' 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.151 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.151 20:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:21.152 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.152 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:21.152 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:21.152 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:21.152 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:21.152 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:21.152 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:21.152 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:21.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:21.152 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:21.152 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:21.152 20:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:21.152 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:21.152 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:21.152 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:21.152 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:21.152 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:21.152 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:21.152 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.152 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:21.152 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.152 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:21.152 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:21.152 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:21.152 20:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- 
# local -a pci_devs 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:23.181 
Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:23.181 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:23.181 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:23.181 20:18:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:23.181 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:23.181 
20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:23.181 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:23.480 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:23.480 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:23.480 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:23.480 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:23.480 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:23.480 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:17:23.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:23.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:17:23.480 00:17:23.480 --- 10.0.0.2 ping statistics --- 00:17:23.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.480 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:17:23.480 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:23.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:23.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:17:23.480 00:17:23.480 --- 10.0.0.1 ping statistics --- 00:17:23.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.480 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:17:23.480 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:23.480 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:17:23.480 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:23.480 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:23.480 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:23.480 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:23.480 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:23.480 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:23.480 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:23.480 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 
0xE 00:17:23.480 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:23.480 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:23.480 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.480 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=217548 00:17:23.480 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:23.480 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 217548 00:17:23.480 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 217548 ']' 00:17:23.480 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.480 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.480 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.480 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.480 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.480 [2024-11-18 20:18:35.309717] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:17:23.480 [2024-11-18 20:18:35.309810] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.480 [2024-11-18 20:18:35.380985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:23.480 [2024-11-18 20:18:35.426547] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:23.480 [2024-11-18 20:18:35.426592] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:23.480 [2024-11-18 20:18:35.426614] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:23.480 [2024-11-18 20:18:35.426647] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:23.480 [2024-11-18 20:18:35.426658] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:23.480 [2024-11-18 20:18:35.428133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:23.480 [2024-11-18 20:18:35.428270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:23.480 [2024-11-18 20:18:35.428273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.754 [2024-11-18 20:18:35.573146] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.754 [2024-11-18 20:18:35.590359] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.754 NULL1 00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=217573 00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:23.754 20:18:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.754 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:24.033 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:24.033 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:24.033 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:24.033 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress --
common/autotest_common.sh@563 -- # xtrace_disable
00:17:24.033 20:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:24.314 20:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:24.314 20:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:24.314 20:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:24.314 20:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:24.314 20:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:24.929 20:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:24.929 20:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:24.929 20:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:24.929 20:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:24.929 20:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:25.214 20:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:25.214 20:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:25.214 20:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:25.214 20:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:25.214 20:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:25.490 20:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:25.490 20:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:25.490 20:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:25.490 20:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:25.490 20:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:25.768 20:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:25.768 20:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:25.768 20:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:25.768 20:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:25.768 20:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:26.054 20:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.054 20:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:26.054 20:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:26.054 20:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.054 20:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:26.341 20:18:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.341 20:18:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:26.341 20:18:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:26.341 20:18:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.341 20:18:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:26.616 20:18:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.616 20:18:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:26.616 20:18:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:26.616 20:18:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.616 20:18:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:26.896 20:18:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.896 20:18:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:26.896 20:18:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:26.896 20:18:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.896 20:18:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:27.471 20:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:27.471 20:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:27.471 20:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:27.471 20:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:27.471 20:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:27.729 20:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:27.729 20:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:27.729 20:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:27.729 20:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:27.729 20:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:27.987 20:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:27.987 20:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:27.987 20:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:27.987 20:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:27.987 20:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:28.247 20:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:28.247 20:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:28.247 20:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:28.247 20:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:28.247 20:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:28.508 20:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:28.508 20:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:28.508 20:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:28.508 20:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:28.508 20:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:29.076 20:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:29.076 20:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:29.076 20:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:29.076 20:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:29.076 20:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:29.333 20:18:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:29.333 20:18:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:29.333 20:18:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:29.333 20:18:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:29.333 20:18:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:29.593 20:18:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:29.593 20:18:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:29.593 20:18:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:29.593 20:18:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:29.593 20:18:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:29.853 20:18:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:29.853 20:18:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:29.853 20:18:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:29.853 20:18:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:29.853 20:18:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:30.113 20:18:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:30.113 20:18:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:30.113 20:18:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:30.113 20:18:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:30.113 20:18:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:30.682 20:18:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:30.682 20:18:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:30.682 20:18:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:30.682 20:18:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:30.682 20:18:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:30.940 20:18:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:30.940 20:18:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:30.940 20:18:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:30.940 20:18:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:30.940 20:18:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:31.199 20:18:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:31.199 20:18:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:31.199 20:18:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:31.199 20:18:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:31.199 20:18:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:31.459 20:18:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:31.459 20:18:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:31.459 20:18:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:31.459 20:18:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:31.459 20:18:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:31.720 20:18:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:31.720 20:18:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:31.720 20:18:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:31.720 20:18:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:31.720 20:18:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:32.288 20:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.288 20:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:32.288 20:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:32.288 20:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.288 20:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:32.546 20:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.546 20:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:32.546 20:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:32.546 20:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.546 20:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:32.805 20:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.805 20:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:32.805 20:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:32.805 20:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.805 20:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:33.066 20:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.066 20:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:33.066 20:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:33.066 20:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.066 20:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:33.341 20:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.341 20:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:33.341 20:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:33.341 20:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.341 20:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:33.907 20:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.907 20:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:33.907 20:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:33.907 20:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.907 20:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:33.907 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:17:34.167 20:18:45
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:34.167 20:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217573
00:17:34.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (217573) - No such process
00:17:34.167 20:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 217573
00:17:34.167 20:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:17:34.167 20:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:17:34.167 20:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:17:34.167 20:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:34.167 20:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync
00:17:34.167 20:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:34.167 20:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e
00:17:34.167 20:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:34.167 20:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:34.167 rmmod nvme_tcp
00:17:34.167 rmmod nvme_fabrics
00:17:34.168 rmmod nvme_keyring
00:17:34.168 20:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:34.168 20:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e
00:17:34.168 20:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0
00:17:34.168 20:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 217548 ']'
00:17:34.168 20:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 217548
00:17:34.168 20:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 217548 ']'
00:17:34.168 20:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 217548
00:17:34.168 20:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname
00:17:34.168 20:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:34.168 20:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 217548
00:17:34.168 20:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:17:34.168 20:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:17:34.168 20:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 217548'
killing process with pid 217548
00:17:34.168 20:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 217548
00:17:34.168 20:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 217548
00:17:34.428 20:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:17:34.428 20:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:17:34.428 20:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:17:34.428 20:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr
00:17:34.428 20:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save
00:17:34.428 20:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:17:34.428 20:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore
00:17:34.428 20:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:17:34.428 20:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:17:34.428 20:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:34.428 20:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:34.428 20:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:36.342 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:17:36.342
00:17:36.342 real 0m15.489s
00:17:36.342 user 0m40.043s
00:17:36.342 sys 0m4.674s
00:17:36.342 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:36.342 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:36.342 ************************************
00:17:36.342 END TEST nvmf_connect_stress
00:17:36.342 ************************************
00:17:36.342 20:18:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:17:36.342 20:18:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:17:36.342 20:18:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111
-- # xtrace_disable
00:17:36.342 20:18:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:17:36.342 ************************************
00:17:36.342 START TEST nvmf_fused_ordering
00:17:36.342 ************************************
00:17:36.342 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:17:36.601 * Looking for test storage...
00:17:36.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-:
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-:
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<'
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:17:36.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:36.601 --rc genhtml_branch_coverage=1
00:17:36.601 --rc genhtml_function_coverage=1
00:17:36.601 --rc genhtml_legend=1
00:17:36.601 --rc geninfo_all_blocks=1
00:17:36.601 --rc geninfo_unexecuted_blocks=1
00:17:36.601
00:17:36.601 '
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:17:36.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:36.601 --rc genhtml_branch_coverage=1
00:17:36.601 --rc genhtml_function_coverage=1
00:17:36.601 --rc genhtml_legend=1
00:17:36.601 --rc geninfo_all_blocks=1
00:17:36.601 --rc geninfo_unexecuted_blocks=1
00:17:36.601
00:17:36.601 '
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:17:36.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:36.601 --rc genhtml_branch_coverage=1
00:17:36.601 --rc genhtml_function_coverage=1
00:17:36.601 --rc genhtml_legend=1
00:17:36.601 --rc geninfo_all_blocks=1
00:17:36.601 --rc geninfo_unexecuted_blocks=1
00:17:36.601
00:17:36.601 '
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:17:36.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:36.601 --rc genhtml_branch_coverage=1
00:17:36.601 --rc genhtml_function_coverage=1
00:17:36.601 --rc genhtml_legend=1
00:17:36.601 --rc geninfo_all_blocks=1
00:17:36.601 --rc geninfo_unexecuted_blocks=1
00:17:36.601
00:17:36.601 '
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:17:36.601 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:17:36.602 20:18:48
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:36.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:36.602 20:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:39.144 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:39.144 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:39.144 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:39.144 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:39.144 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:39.144 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:39.144 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:39.144 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:39.144 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:39.144 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:39.144 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:39.144 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:39.144 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:39.144 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:39.144 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:39.144 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:39.145 20:18:50 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:39.145 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:39.145 20:18:50 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:39.145 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.145 20:18:50 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:39.145 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:39.145 Found net devices under 0000:0a:00.1: cvl_0_1 
00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:39.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:39.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:17:39.145 00:17:39.145 --- 10.0.0.2 ping statistics --- 00:17:39.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.145 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:39.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:39.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:17:39.145 00:17:39.145 --- 10.0.0.1 ping statistics --- 00:17:39.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.145 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:39.145 20:18:50 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:39.145 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:39.146 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:39.146 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=220743 00:17:39.146 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 220743 00:17:39.146 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:39.146 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 220743 ']' 00:17:39.146 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.146 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:39.146 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.146 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:39.146 20:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:39.146 [2024-11-18 20:18:50.801317] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:17:39.146 [2024-11-18 20:18:50.801402] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.146 [2024-11-18 20:18:50.873609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.146 [2024-11-18 20:18:50.920707] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.146 [2024-11-18 20:18:50.920756] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.146 [2024-11-18 20:18:50.920769] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:39.146 [2024-11-18 20:18:50.920781] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:39.146 [2024-11-18 20:18:50.920790] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:39.146 [2024-11-18 20:18:50.921323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:39.146 [2024-11-18 20:18:51.070714] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:39.146 [2024-11-18 20:18:51.086920] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:39.146 NULL1 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.146 20:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:39.146 [2024-11-18 20:18:51.129910] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:17:39.146 [2024-11-18 20:18:51.129959] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid220883 ] 00:17:39.717 Attached to nqn.2016-06.io.spdk:cnode1 00:17:39.717 Namespace ID: 1 size: 1GB 00:17:39.717 fused_ordering(0) 00:17:39.717 fused_ordering(1) 00:17:39.717 fused_ordering(2) 00:17:39.717 fused_ordering(3) 00:17:39.717 fused_ordering(4) 00:17:39.717 fused_ordering(5) 00:17:39.717 fused_ordering(6) 00:17:39.717 fused_ordering(7) 00:17:39.717 fused_ordering(8) 00:17:39.717 fused_ordering(9) 00:17:39.717 fused_ordering(10) 00:17:39.717 fused_ordering(11) 00:17:39.717 fused_ordering(12) 00:17:39.717 fused_ordering(13) 00:17:39.717 fused_ordering(14) 00:17:39.717 fused_ordering(15) 00:17:39.717 fused_ordering(16) 00:17:39.717 fused_ordering(17) 00:17:39.717 fused_ordering(18) 00:17:39.717 fused_ordering(19) 00:17:39.717 fused_ordering(20) 00:17:39.717 fused_ordering(21) 00:17:39.717 fused_ordering(22) 00:17:39.717 fused_ordering(23) 00:17:39.717 fused_ordering(24) 00:17:39.717 fused_ordering(25) 00:17:39.717 fused_ordering(26) 00:17:39.717 fused_ordering(27) 00:17:39.717 
fused_ordering(28) … fused_ordering(997) [monotonic counter output, timestamps 00:17:39.717–00:17:41.383]
00:17:41.383 fused_ordering(998) 00:17:41.383 fused_ordering(999) 00:17:41.383 fused_ordering(1000) 00:17:41.383 fused_ordering(1001) 00:17:41.383 fused_ordering(1002) 00:17:41.383 fused_ordering(1003) 00:17:41.383 fused_ordering(1004) 00:17:41.383 fused_ordering(1005) 00:17:41.383 fused_ordering(1006) 00:17:41.383 fused_ordering(1007) 00:17:41.383 fused_ordering(1008) 00:17:41.383 fused_ordering(1009) 00:17:41.383 fused_ordering(1010) 00:17:41.383 fused_ordering(1011) 00:17:41.383 fused_ordering(1012) 00:17:41.383 fused_ordering(1013) 00:17:41.383 fused_ordering(1014) 00:17:41.383 fused_ordering(1015) 00:17:41.383 fused_ordering(1016) 00:17:41.383 fused_ordering(1017) 00:17:41.383 fused_ordering(1018) 00:17:41.383 fused_ordering(1019) 00:17:41.383 fused_ordering(1020) 00:17:41.383 fused_ordering(1021) 00:17:41.383 fused_ordering(1022) 00:17:41.383 fused_ordering(1023) 00:17:41.383 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:41.383 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:41.383 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:41.383 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:41.383 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:41.383 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:41.383 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:41.383 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:41.383 rmmod nvme_tcp 00:17:41.383 rmmod nvme_fabrics 00:17:41.383 rmmod nvme_keyring 00:17:41.383 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:17:41.383 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:41.383 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:41.383 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 220743 ']' 00:17:41.383 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 220743 00:17:41.383 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 220743 ']' 00:17:41.383 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 220743 00:17:41.384 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:17:41.384 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:41.384 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 220743 00:17:41.643 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:41.643 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:41.643 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 220743' 00:17:41.643 killing process with pid 220743 00:17:41.644 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 220743 00:17:41.644 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 220743 00:17:41.644 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:41.644 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
00:17:41.644 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:41.644 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:41.644 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:17:41.644 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:41.644 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:17:41.644 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:41.644 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:41.644 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.644 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:41.644 20:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:44.187 00:17:44.187 real 0m7.339s 00:17:44.187 user 0m5.092s 00:17:44.187 sys 0m2.806s 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:44.187 ************************************ 00:17:44.187 END TEST nvmf_fused_ordering 00:17:44.187 ************************************ 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:44.187 20:18:55 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:44.187 ************************************ 00:17:44.187 START TEST nvmf_ns_masking 00:17:44.187 ************************************ 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:44.187 * Looking for test storage... 00:17:44.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:44.187 20:18:55 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:44.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.187 --rc genhtml_branch_coverage=1 00:17:44.187 --rc genhtml_function_coverage=1 00:17:44.187 --rc genhtml_legend=1 00:17:44.187 --rc geninfo_all_blocks=1 00:17:44.187 --rc geninfo_unexecuted_blocks=1 00:17:44.187 00:17:44.187 ' 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:44.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.187 --rc genhtml_branch_coverage=1 00:17:44.187 --rc genhtml_function_coverage=1 00:17:44.187 --rc genhtml_legend=1 00:17:44.187 --rc geninfo_all_blocks=1 00:17:44.187 --rc geninfo_unexecuted_blocks=1 00:17:44.187 00:17:44.187 ' 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:44.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.187 --rc genhtml_branch_coverage=1 00:17:44.187 --rc genhtml_function_coverage=1 00:17:44.187 --rc genhtml_legend=1 00:17:44.187 --rc geninfo_all_blocks=1 00:17:44.187 --rc geninfo_unexecuted_blocks=1 00:17:44.187 00:17:44.187 ' 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:44.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.187 --rc genhtml_branch_coverage=1 00:17:44.187 --rc 
genhtml_function_coverage=1 00:17:44.187 --rc genhtml_legend=1 00:17:44.187 --rc geninfo_all_blocks=1 00:17:44.187 --rc geninfo_unexecuted_blocks=1 00:17:44.187 00:17:44.187 ' 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.187 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:44.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=621bad34-4b2c-46dc-a414-5f0516420195 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=37ee8e08-e294-4ff2-8dc1-5157bcecae67 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=61c4d470-b738-4430-89ce-1d70844490f5 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:44.188 20:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:46.097 20:18:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:46.097 20:18:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:46.097 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:46.097 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:17:46.097 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:46.097 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:46.097 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:46.098 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:46.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:46.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:17:46.358 00:17:46.358 --- 10.0.0.2 ping statistics --- 00:17:46.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.358 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:46.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:46.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:17:46.358 00:17:46.358 --- 10.0.0.1 ping statistics --- 00:17:46.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.358 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=223092 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 223092 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 223092 ']' 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:46.358 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:46.358 [2024-11-18 20:18:58.245785] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:17:46.358 [2024-11-18 20:18:58.245876] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.358 [2024-11-18 20:18:58.316104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.358 [2024-11-18 20:18:58.359661] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.358 [2024-11-18 20:18:58.359720] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:46.358 [2024-11-18 20:18:58.359746] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.358 [2024-11-18 20:18:58.359757] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.358 [2024-11-18 20:18:58.359767] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:46.358 [2024-11-18 20:18:58.360516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.617 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:46.617 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:46.617 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:46.617 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:46.617 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:46.617 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.617 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:46.875 [2024-11-18 20:18:58.802925] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:46.875 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:46.875 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:46.875 20:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:17:47.135 Malloc1 00:17:47.135 20:18:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:47.703 Malloc2 00:17:47.703 20:18:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:47.961 20:18:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:48.220 20:19:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:48.480 [2024-11-18 20:19:00.317299] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:48.480 20:19:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:48.480 20:19:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 61c4d470-b738-4430-89ce-1d70844490f5 -a 10.0.0.2 -s 4420 -i 4 00:17:48.741 20:19:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:48.741 20:19:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:48.741 20:19:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:48.741 20:19:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:48.741 20:19:00 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:50.651 20:19:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:50.652 20:19:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:50.652 20:19:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:50.652 20:19:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:50.652 20:19:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:50.652 20:19:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:50.652 20:19:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:50.652 20:19:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:50.652 20:19:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:50.652 20:19:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:50.652 20:19:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:50.652 20:19:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:50.652 20:19:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:50.652 [ 0]:0x1 00:17:50.652 20:19:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:50.652 20:19:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:50.910 
20:19:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ce7b8443d5a14397a03858336209f3d9 00:17:50.910 20:19:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ce7b8443d5a14397a03858336209f3d9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:50.910 20:19:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:51.169 20:19:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:51.169 20:19:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:51.169 20:19:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:51.169 [ 0]:0x1 00:17:51.169 20:19:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:51.169 20:19:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:51.169 20:19:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ce7b8443d5a14397a03858336209f3d9 00:17:51.169 20:19:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ce7b8443d5a14397a03858336209f3d9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:51.169 20:19:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:51.169 20:19:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:51.169 20:19:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:51.169 [ 1]:0x2 00:17:51.169 20:19:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:17:51.169 20:19:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:51.169 20:19:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9aa7bfcbf01041be9534e1244d912299 00:17:51.169 20:19:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9aa7bfcbf01041be9534e1244d912299 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:51.169 20:19:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:51.169 20:19:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:51.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:51.169 20:19:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:51.427 20:19:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:51.686 20:19:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:51.686 20:19:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 61c4d470-b738-4430-89ce-1d70844490f5 -a 10.0.0.2 -s 4420 -i 4 00:17:51.946 20:19:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:51.946 20:19:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:51.946 20:19:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:51.946 20:19:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:17:51.946 20:19:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:17:51.946 20:19:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:53.857 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:53.857 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:53.857 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:53.857 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:53.857 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:53.857 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:53.857 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:53.857 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:54.116 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:54.116 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:54.116 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:54.116 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:54.116 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:17:54.116 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:54.116 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.116 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:54.116 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.116 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:54.116 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:54.116 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:54.116 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:54.116 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:54.116 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:54.116 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:54.116 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:54.116 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:54.116 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:54.116 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:54.116 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:17:54.116 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:54.116 20:19:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:54.116 [ 0]:0x2 00:17:54.116 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:54.116 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:54.116 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9aa7bfcbf01041be9534e1244d912299 00:17:54.116 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9aa7bfcbf01041be9534e1244d912299 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:54.116 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:54.685 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:54.685 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:54.685 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:54.685 [ 0]:0x1 00:17:54.685 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:54.685 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:54.685 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ce7b8443d5a14397a03858336209f3d9 00:17:54.685 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ce7b8443d5a14397a03858336209f3d9 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:54.685 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:54.685 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:54.685 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:54.685 [ 1]:0x2 00:17:54.685 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:54.685 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:54.685 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9aa7bfcbf01041be9534e1244d912299 00:17:54.685 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9aa7bfcbf01041be9534e1244d912299 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:54.685 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:54.943 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:54.943 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:54.943 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:54.943 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:54.943 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.943 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:17:54.943 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.943 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:54.943 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:54.943 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:54.943 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:54.944 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:54.944 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:54.944 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:54.944 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:54.944 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:54.944 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:54.944 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:54.944 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:54.944 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:54.944 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:54.944 [ 0]:0x2 00:17:54.944 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:54.944 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:54.944 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9aa7bfcbf01041be9534e1244d912299 00:17:54.944 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9aa7bfcbf01041be9534e1244d912299 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:54.944 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:54.944 20:19:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:55.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:55.202 20:19:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:55.461 20:19:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:55.461 20:19:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 61c4d470-b738-4430-89ce-1d70844490f5 -a 10.0.0.2 -s 4420 -i 4 00:17:55.461 20:19:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:55.461 20:19:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:55.461 20:19:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:55.461 20:19:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:55.461 20:19:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:55.461 20:19:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:57.995 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:57.995 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:57.995 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:57.995 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:57.995 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:57.995 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:57.995 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:57.995 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:57.995 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:57.995 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:57.995 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:57.995 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:57.995 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:57.995 [ 0]:0x1 00:17:57.995 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:57.995 20:19:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:57.995 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ce7b8443d5a14397a03858336209f3d9 00:17:57.995 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ce7b8443d5a14397a03858336209f3d9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:57.995 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:57.995 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:57.995 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:57.995 [ 1]:0x2 00:17:57.995 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:57.995 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:57.995 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9aa7bfcbf01041be9534e1244d912299 00:17:57.995 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9aa7bfcbf01041be9534e1244d912299 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:57.995 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:57.996 
20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:57.996 [ 0]:0x2 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9aa7bfcbf01041be9534e1244d912299 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9aa7bfcbf01041be9534e1244d912299 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:57.996 20:19:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:57.996 20:19:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:58.564 [2024-11-18 20:19:10.299207] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:58.564 request: 00:17:58.564 { 00:17:58.564 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.564 "nsid": 2, 00:17:58.564 "host": "nqn.2016-06.io.spdk:host1", 00:17:58.564 "method": "nvmf_ns_remove_host", 00:17:58.564 "req_id": 1 00:17:58.564 } 00:17:58.564 Got JSON-RPC error response 00:17:58.564 response: 00:17:58.564 { 00:17:58.564 "code": -32602, 00:17:58.564 "message": "Invalid parameters" 00:17:58.564 } 00:17:58.564 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:58.564 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:58.564 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:58.564 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:58.564 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:58.564 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:58.564 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:58.564 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:58.564 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.564 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:58.564 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.564 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:58.564 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:58.564 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:58.564 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:58.564 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:58.564 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:58.564 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:58.564 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:58.564 20:19:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:58.564 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:58.564 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:58.564 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:58.564 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:58.564 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:58.564 [ 0]:0x2 00:17:58.564 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:58.564 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:58.564 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9aa7bfcbf01041be9534e1244d912299 00:17:58.564 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9aa7bfcbf01041be9534e1244d912299 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:58.564 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:58.564 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:58.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:58.823 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=224712 00:17:58.823 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:58.823 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.823 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 224712 /var/tmp/host.sock 00:17:58.823 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 224712 ']' 00:17:58.823 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:58.823 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.823 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:58.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:58.823 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.823 20:19:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:58.823 [2024-11-18 20:19:10.633144] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:17:58.823 [2024-11-18 20:19:10.633227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224712 ] 00:17:58.823 [2024-11-18 20:19:10.700004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.823 [2024-11-18 20:19:10.745020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.081 20:19:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.081 20:19:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:59.081 20:19:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:59.339 20:19:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:59.598 20:19:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 621bad34-4b2c-46dc-a414-5f0516420195 00:17:59.598 20:19:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:59.598 20:19:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 621BAD344B2C46DCA4145F0516420195 -i 00:17:59.856 20:19:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 37ee8e08-e294-4ff2-8dc1-5157bcecae67 00:17:59.856 20:19:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:59.856 20:19:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 37EE8E08E2944FF28DC15157BCECAE67 -i 00:18:00.114 20:19:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:00.373 20:19:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:00.632 20:19:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:00.632 20:19:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:01.201 nvme0n1 00:18:01.201 20:19:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:01.201 20:19:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:01.459 nvme1n2 00:18:01.459 20:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:01.459 20:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:01.459 20:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:01.459 20:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:01.459 20:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:01.718 20:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:01.718 20:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:01.718 20:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:01.718 20:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:01.978 20:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 621bad34-4b2c-46dc-a414-5f0516420195 == \6\2\1\b\a\d\3\4\-\4\b\2\c\-\4\6\d\c\-\a\4\1\4\-\5\f\0\5\1\6\4\2\0\1\9\5 ]] 00:18:02.238 20:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:02.239 20:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:02.239 20:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:02.498 20:19:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 37ee8e08-e294-4ff2-8dc1-5157bcecae67 == \3\7\e\e\8\e\0\8\-\e\2\9\4\-\4\f\f\2\-\8\d\c\1\-\5\1\5\7\b\c\e\c\a\e\6\7 ]] 00:18:02.498 20:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:02.757 20:19:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:03.021 20:19:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 621bad34-4b2c-46dc-a414-5f0516420195 00:18:03.021 20:19:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:03.021 20:19:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 621BAD344B2C46DCA4145F0516420195 00:18:03.021 20:19:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:03.021 20:19:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 621BAD344B2C46DCA4145F0516420195 00:18:03.021 20:19:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:03.021 20:19:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:03.021 20:19:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:03.021 20:19:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:03.021 20:19:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:03.021 20:19:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:03.021 20:19:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:03.021 20:19:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:03.021 20:19:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 621BAD344B2C46DCA4145F0516420195 00:18:03.280 [2024-11-18 20:19:15.073098] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:18:03.280 [2024-11-18 20:19:15.073140] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:18:03.280 [2024-11-18 20:19:15.073170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.280 request: 00:18:03.280 { 00:18:03.280 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.280 "namespace": { 00:18:03.280 "bdev_name": "invalid", 00:18:03.280 "nsid": 1, 00:18:03.280 "nguid": "621BAD344B2C46DCA4145F0516420195", 00:18:03.280 "no_auto_visible": false 00:18:03.280 }, 00:18:03.280 "method": "nvmf_subsystem_add_ns", 00:18:03.280 "req_id": 1 00:18:03.280 } 00:18:03.280 Got JSON-RPC error response 00:18:03.280 response: 00:18:03.280 { 00:18:03.280 "code": -32602, 00:18:03.280 "message": "Invalid parameters" 00:18:03.280 } 00:18:03.280 20:19:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:03.280 20:19:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:03.280 20:19:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:03.280 20:19:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:03.280 20:19:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 621bad34-4b2c-46dc-a414-5f0516420195 00:18:03.280 20:19:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:03.280 20:19:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 621BAD344B2C46DCA4145F0516420195 -i 00:18:03.539 20:19:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:18:05.448 20:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:18:05.448 20:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:18:05.448 20:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:05.707 20:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:18:05.707 20:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 224712 00:18:05.707 20:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 224712 ']' 00:18:05.707 20:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 224712 00:18:05.707 20:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:05.707 20:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:05.707 20:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 224712 00:18:05.707 20:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:05.707 20:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:05.707 20:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 224712' 00:18:05.707 killing process with pid 224712 00:18:05.707 20:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 224712 00:18:05.707 20:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 224712 00:18:06.276 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:06.537 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:06.537 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:18:06.537 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:06.537 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:06.537 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:06.537 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:06.537 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:06.537 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:06.537 rmmod nvme_tcp 00:18:06.537 rmmod 
nvme_fabrics 00:18:06.537 rmmod nvme_keyring 00:18:06.537 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:06.537 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:06.537 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:06.537 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 223092 ']' 00:18:06.537 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 223092 00:18:06.537 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 223092 ']' 00:18:06.537 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 223092 00:18:06.537 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:06.537 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:06.537 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 223092 00:18:06.537 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:06.537 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:06.537 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 223092' 00:18:06.537 killing process with pid 223092 00:18:06.537 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 223092 00:18:06.537 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 223092 00:18:06.798 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:06.798 20:19:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:06.798 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:06.798 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:06.798 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:18:06.798 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:06.798 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:18:06.798 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:06.798 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:06.798 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.798 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:06.798 20:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.707 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:08.966 00:18:08.966 real 0m25.006s 00:18:08.966 user 0m36.107s 00:18:08.966 sys 0m4.794s 00:18:08.966 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:08.966 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:08.966 ************************************ 00:18:08.966 END TEST nvmf_ns_masking 00:18:08.966 ************************************ 00:18:08.966 20:19:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:08.966 20:19:20 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:08.966 20:19:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:08.966 20:19:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:08.966 20:19:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:08.966 ************************************ 00:18:08.966 START TEST nvmf_nvme_cli 00:18:08.966 ************************************ 00:18:08.966 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:08.966 * Looking for test storage... 00:18:08.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:08.966 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:08.966 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:18:08.966 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:08.966 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:08.966 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:08.966 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:08.966 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:08.966 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:08.966 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:08.966 20:19:20 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-:
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<'
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 ))
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:18:08.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:08.967 --rc genhtml_branch_coverage=1
00:18:08.967 --rc genhtml_function_coverage=1
00:18:08.967 --rc genhtml_legend=1
00:18:08.967 --rc geninfo_all_blocks=1
00:18:08.967 --rc geninfo_unexecuted_blocks=1
00:18:08.967
00:18:08.967 '
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:18:08.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:08.967 --rc genhtml_branch_coverage=1
00:18:08.967 --rc genhtml_function_coverage=1
00:18:08.967 --rc genhtml_legend=1
00:18:08.967 --rc geninfo_all_blocks=1
00:18:08.967 --rc geninfo_unexecuted_blocks=1
00:18:08.967
00:18:08.967 '
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:18:08.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:08.967 --rc genhtml_branch_coverage=1
00:18:08.967 --rc genhtml_function_coverage=1
00:18:08.967 --rc genhtml_legend=1
00:18:08.967 --rc geninfo_all_blocks=1
00:18:08.967 --rc geninfo_unexecuted_blocks=1
00:18:08.967
00:18:08.967 '
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:18:08.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:08.967 --rc genhtml_branch_coverage=1
00:18:08.967 --rc genhtml_function_coverage=1
00:18:08.967 --rc genhtml_legend=1
00:18:08.967 --rc geninfo_all_blocks=1
00:18:08.967 --rc geninfo_unexecuted_blocks=1
00:18:08.967
00:18:08.967 '
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:18:08.967 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:18:08.968 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:18:08.968 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:18:08.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:18:08.968 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:18:08.968 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:18:08.968 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0
00:18:08.968 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64
00:18:08.968 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:18:08.968 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=()
00:18:08.968 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit
00:18:08.968 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:18:08.968 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:18:08.968 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs
00:18:08.968 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no
00:18:08.968 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns
00:18:08.968 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:08.968 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:18:08.968 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:08.968 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:18:08.968 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:18:08.968 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable
00:18:08.968 20:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:18:11.618 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:18:11.618 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=()
00:18:11.618 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs
00:18:11.618 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=()
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=()
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=()
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=()
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=()
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=()
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:18:11.619 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:18:11.619 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]]
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:18:11.619 Found net devices under 0000:0a:00.0: cvl_0_0
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]]
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:18:11.619 Found net devices under 0000:0a:00.1: cvl_0_1
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:18:11.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:18:11.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms
00:18:11.619
00:18:11.619 --- 10.0.0.2 ping statistics ---
00:18:11.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:11.619 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:18:11.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:11.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms
00:18:11.619
00:18:11.619 --- 10.0.0.1 ping statistics ---
00:18:11.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:11.619 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:18:11.619 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=227616
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 227616
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 227616 ']'
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:11.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:18:11.620 [2024-11-18 20:19:23.215309] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization...
00:18:11.620 [2024-11-18 20:19:23.215382] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:11.620 [2024-11-18 20:19:23.289547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:18:11.620 [2024-11-18 20:19:23.339251] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:11.620 [2024-11-18 20:19:23.339310] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:11.620 [2024-11-18 20:19:23.339339] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:11.620 [2024-11-18 20:19:23.339351] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:11.620 [2024-11-18 20:19:23.339361] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:11.620 [2024-11-18 20:19:23.341094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:18:11.620 [2024-11-18 20:19:23.341152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:18:11.620 [2024-11-18 20:19:23.341219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:18:11.620 [2024-11-18 20:19:23.341222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:18:11.620 [2024-11-18 20:19:23.495230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:18:11.620 Malloc0
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:18:11.620 Malloc1
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:18:11.620 [2024-11-18 20:19:23.592218] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.620 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420
00:18:11.903
00:18:11.903 Discovery Log Number of Records 2, Generation counter 2
00:18:11.903 =====Discovery Log Entry 0======
00:18:11.903 trtype: tcp
00:18:11.904 adrfam: ipv4
00:18:11.904 subtype: current discovery subsystem
00:18:11.904 treq: not required
00:18:11.904 portid: 0
00:18:11.904 trsvcid: 4420
00:18:11.904 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:18:11.904 traddr: 10.0.0.2
00:18:11.904 eflags: explicit discovery connections, duplicate discovery information
00:18:11.904 sectype: none
00:18:11.904 =====Discovery Log Entry 1======
00:18:11.904 trtype: tcp
00:18:11.904 adrfam: ipv4
00:18:11.904 subtype: nvme subsystem
00:18:11.904 treq: not required
00:18:11.904 portid: 0
00:18:11.904 trsvcid: 4420
00:18:11.904 subnqn: nqn.2016-06.io.spdk:cnode1
00:18:11.904 traddr: 10.0.0.2
00:18:11.904 eflags: none
00:18:11.904 sectype: none
00:18:11.904 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs))
00:18:11.904 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs
00:18:11.904 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _
00:18:11.904 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:18:11.904 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list
00:18:11.904 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]]
00:18:11.904 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:18:11.904 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]]
00:18:11.904 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:18:11.904 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0
00:18:11.904 20:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:18:12.548 20:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2
00:18:12.548 20:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0
00:18:12.548 20:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:18:12.548 20:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]]
00:18:12.548 20:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2
00:18:12.548 20:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2
00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2
00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0
00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs
00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _
00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list
00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]]
00:18:14.452
20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:14.452 /dev/nvme0n2 ]] 00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:14.452 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:14.453 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:14.453 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:14.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:14.713 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:14.713 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:18:14.713 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:14.713 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:14.713 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:14.713 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:14.713 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:18:14.713 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:14.713 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:14.713 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.713 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:14.713 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.713 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:14.713 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:14.713 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:14.713 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:14.713 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:14.713 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:14.713 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:14.713 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:14.713 rmmod nvme_tcp 00:18:14.713 rmmod nvme_fabrics 00:18:14.713 rmmod nvme_keyring 00:18:14.713 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:14.713 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:14.713 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:14.713 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 227616 ']' 
00:18:14.713 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 227616 00:18:14.714 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 227616 ']' 00:18:14.714 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 227616 00:18:14.714 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:18:14.714 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.714 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 227616 00:18:14.714 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:14.714 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:14.714 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 227616' 00:18:14.714 killing process with pid 227616 00:18:14.714 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 227616 00:18:14.714 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 227616 00:18:14.973 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:14.973 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:14.973 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:14.973 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:14.973 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:18:14.973 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 
00:18:14.973 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:14.973 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:14.973 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:14.973 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.973 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:14.973 20:19:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.524 20:19:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:17.524 00:18:17.524 real 0m8.213s 00:18:17.524 user 0m14.976s 00:18:17.524 sys 0m2.276s 00:18:17.524 20:19:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:17.524 20:19:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:17.524 ************************************ 00:18:17.524 END TEST nvmf_nvme_cli 00:18:17.524 ************************************ 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:17.524 ************************************ 00:18:17.524 START TEST 
nvmf_vfio_user 00:18:17.524 ************************************ 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:17.524 * Looking for test storage... 00:18:17.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:17.524 20:19:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:17.524 20:19:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:17.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.524 --rc genhtml_branch_coverage=1 00:18:17.524 --rc genhtml_function_coverage=1 00:18:17.524 --rc genhtml_legend=1 00:18:17.524 --rc geninfo_all_blocks=1 00:18:17.524 --rc geninfo_unexecuted_blocks=1 00:18:17.524 00:18:17.524 ' 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:17.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.524 --rc genhtml_branch_coverage=1 00:18:17.524 --rc genhtml_function_coverage=1 00:18:17.524 --rc genhtml_legend=1 00:18:17.524 --rc geninfo_all_blocks=1 00:18:17.524 --rc geninfo_unexecuted_blocks=1 00:18:17.524 00:18:17.524 ' 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:17.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.524 --rc genhtml_branch_coverage=1 00:18:17.524 --rc genhtml_function_coverage=1 00:18:17.524 --rc genhtml_legend=1 00:18:17.524 --rc geninfo_all_blocks=1 00:18:17.524 --rc geninfo_unexecuted_blocks=1 00:18:17.524 00:18:17.524 ' 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:17.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.524 --rc genhtml_branch_coverage=1 00:18:17.524 --rc genhtml_function_coverage=1 00:18:17.524 --rc genhtml_legend=1 00:18:17.524 --rc geninfo_all_blocks=1 00:18:17.524 --rc geninfo_unexecuted_blocks=1 00:18:17.524 00:18:17.524 ' 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:17.524 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:17.525 
20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:17.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:17.525 20:19:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=228434 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 228434' 00:18:17.525 Process pid: 228434 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 228434 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 
228434 ']' 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:17.525 [2024-11-18 20:19:29.234596] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:18:17.525 [2024-11-18 20:19:29.234712] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.525 [2024-11-18 20:19:29.299671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:17.525 [2024-11-18 20:19:29.346119] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.525 [2024-11-18 20:19:29.346186] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.525 [2024-11-18 20:19:29.346199] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.525 [2024-11-18 20:19:29.346223] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.525 [2024-11-18 20:19:29.346233] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:17.525 [2024-11-18 20:19:29.347595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.525 [2024-11-18 20:19:29.347661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.525 [2024-11-18 20:19:29.347728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:17.525 [2024-11-18 20:19:29.347731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:17.525 20:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:18.463 20:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:19.029 20:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:19.029 20:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:19.029 20:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:19.029 20:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:19.029 20:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:19.289 Malloc1 00:18:19.289 20:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:19.548 20:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:19.806 20:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:20.065 20:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:20.065 20:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:20.065 20:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:20.323 Malloc2 00:18:20.323 20:19:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:20.582 20:19:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:20.840 20:19:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:21.100 20:19:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:21.100 20:19:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:21.100 20:19:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:18:21.100 20:19:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:21.100 20:19:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:21.100 20:19:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:21.100 [2024-11-18 20:19:33.023198] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:18:21.100 [2024-11-18 20:19:33.023239] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid228857 ] 00:18:21.100 [2024-11-18 20:19:33.073654] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:21.100 [2024-11-18 20:19:33.083122] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:21.100 [2024-11-18 20:19:33.083151] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe5bf91f000 00:18:21.100 [2024-11-18 20:19:33.084128] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:21.100 [2024-11-18 20:19:33.085108] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:21.100 [2024-11-18 20:19:33.086116] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:21.100 [2024-11-18 20:19:33.087121] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:21.100 [2024-11-18 20:19:33.088123] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:21.100 [2024-11-18 20:19:33.089131] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:21.100 [2024-11-18 20:19:33.090140] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:21.100 [2024-11-18 20:19:33.091144] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:21.100 [2024-11-18 20:19:33.092162] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:21.100 [2024-11-18 20:19:33.092181] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe5be617000 00:18:21.100 [2024-11-18 20:19:33.093303] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:21.364 [2024-11-18 20:19:33.108945] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:21.364 [2024-11-18 20:19:33.108993] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:18:21.364 [2024-11-18 20:19:33.111275] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:21.364 [2024-11-18 20:19:33.111329] 
nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:21.364 [2024-11-18 20:19:33.111419] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:18:21.364 [2024-11-18 20:19:33.111451] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:18:21.364 [2024-11-18 20:19:33.111462] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:18:21.364 [2024-11-18 20:19:33.112648] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:21.364 [2024-11-18 20:19:33.112670] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:18:21.364 [2024-11-18 20:19:33.112682] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:18:21.364 [2024-11-18 20:19:33.113278] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:21.364 [2024-11-18 20:19:33.113298] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:18:21.364 [2024-11-18 20:19:33.113311] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:21.364 [2024-11-18 20:19:33.114279] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:21.364 [2024-11-18 20:19:33.114298] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:21.364 [2024-11-18 20:19:33.115285] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:18:21.364 [2024-11-18 20:19:33.115303] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:21.364 [2024-11-18 20:19:33.115312] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:21.364 [2024-11-18 20:19:33.115328] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:21.364 [2024-11-18 20:19:33.115438] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:18:21.364 [2024-11-18 20:19:33.115446] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:21.364 [2024-11-18 20:19:33.115456] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:21.364 [2024-11-18 20:19:33.116299] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:21.364 [2024-11-18 20:19:33.119647] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:21.364 [2024-11-18 20:19:33.120327] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:21.364 [2024-11-18 20:19:33.121311] 
vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:21.364 [2024-11-18 20:19:33.121421] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:21.364 [2024-11-18 20:19:33.122325] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:21.364 [2024-11-18 20:19:33.122343] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:21.364 [2024-11-18 20:19:33.122352] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:21.364 [2024-11-18 20:19:33.122375] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:18:21.364 [2024-11-18 20:19:33.122395] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:21.365 [2024-11-18 20:19:33.122424] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:21.365 [2024-11-18 20:19:33.122434] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:21.365 [2024-11-18 20:19:33.122441] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:21.365 [2024-11-18 20:19:33.122460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:21.365 [2024-11-18 20:19:33.122525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:18:21.365 [2024-11-18 20:19:33.122543] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:18:21.365 [2024-11-18 20:19:33.122551] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:18:21.365 [2024-11-18 20:19:33.122558] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:18:21.365 [2024-11-18 20:19:33.122566] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:21.365 [2024-11-18 20:19:33.122578] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:18:21.365 [2024-11-18 20:19:33.122587] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:18:21.365 [2024-11-18 20:19:33.122598] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:18:21.365 [2024-11-18 20:19:33.122615] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:21.365 [2024-11-18 20:19:33.122654] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:21.365 [2024-11-18 20:19:33.122671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:21.365 [2024-11-18 20:19:33.122704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:21.365 [2024-11-18 20:19:33.122717] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:21.365 [2024-11-18 20:19:33.122729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:21.365 [2024-11-18 20:19:33.122741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:21.365 [2024-11-18 20:19:33.122750] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:21.365 [2024-11-18 20:19:33.122763] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:21.365 [2024-11-18 20:19:33.122776] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:21.365 [2024-11-18 20:19:33.122789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:21.365 [2024-11-18 20:19:33.122805] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:18:21.365 [2024-11-18 20:19:33.122814] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:21.365 [2024-11-18 20:19:33.122826] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:18:21.365 [2024-11-18 20:19:33.122837] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:18:21.365 [2024-11-18 20:19:33.122850] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:21.365 [2024-11-18 20:19:33.122864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:21.365 [2024-11-18 20:19:33.122947] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:18:21.365 [2024-11-18 20:19:33.122965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:21.365 [2024-11-18 20:19:33.122979] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:21.365 [2024-11-18 20:19:33.123002] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:21.365 [2024-11-18 20:19:33.123008] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:21.365 [2024-11-18 20:19:33.123017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:21.365 [2024-11-18 20:19:33.123033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:21.365 [2024-11-18 20:19:33.123055] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:18:21.365 [2024-11-18 20:19:33.123075] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:18:21.365 [2024-11-18 20:19:33.123091] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:18:21.365 [2024-11-18 20:19:33.123103] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:21.365 [2024-11-18 20:19:33.123111] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:21.365 [2024-11-18 20:19:33.123116] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:21.365 [2024-11-18 20:19:33.123125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:21.365 [2024-11-18 20:19:33.123154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:21.365 [2024-11-18 20:19:33.123177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:21.365 [2024-11-18 20:19:33.123192] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:21.365 [2024-11-18 20:19:33.123204] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:21.365 [2024-11-18 20:19:33.123212] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:21.365 [2024-11-18 20:19:33.123217] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:21.365 [2024-11-18 20:19:33.123226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:21.365 [2024-11-18 20:19:33.123242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:18:21.365 [2024-11-18 20:19:33.123256] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:21.365 [2024-11-18 20:19:33.123267] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:21.365 [2024-11-18 20:19:33.123281] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:18:21.365 [2024-11-18 20:19:33.123292] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:21.365 [2024-11-18 20:19:33.123300] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:21.365 [2024-11-18 20:19:33.123308] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:18:21.365 [2024-11-18 20:19:33.123316] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:21.365 [2024-11-18 20:19:33.123323] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:18:21.365 [2024-11-18 20:19:33.123331] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:18:21.365 [2024-11-18 20:19:33.123359] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:21.365 [2024-11-18 20:19:33.123381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:21.365 [2024-11-18 20:19:33.123400] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:21.365 [2024-11-18 20:19:33.123412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:21.365 [2024-11-18 20:19:33.123427] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:21.365 [2024-11-18 20:19:33.123439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:21.365 [2024-11-18 20:19:33.123454] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:21.365 [2024-11-18 20:19:33.123465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:21.365 [2024-11-18 20:19:33.123486] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:21.365 [2024-11-18 20:19:33.123496] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:21.366 [2024-11-18 20:19:33.123502] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:21.366 [2024-11-18 20:19:33.123508] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:21.366 [2024-11-18 20:19:33.123514] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:21.366 [2024-11-18 20:19:33.123523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:21.366 [2024-11-18 20:19:33.123534] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:21.366 [2024-11-18 20:19:33.123541] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:21.366 [2024-11-18 20:19:33.123547] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:21.366 [2024-11-18 20:19:33.123555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:21.366 [2024-11-18 20:19:33.123565] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:21.366 [2024-11-18 20:19:33.123573] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:21.366 [2024-11-18 20:19:33.123578] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:21.366 [2024-11-18 20:19:33.123587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:21.366 [2024-11-18 20:19:33.123599] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:21.366 [2024-11-18 20:19:33.123606] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:21.366 [2024-11-18 20:19:33.123612] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:21.366 [2024-11-18 20:19:33.123643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:21.366 [2024-11-18 20:19:33.123659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:21.366 [2024-11-18 
20:19:33.123697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:21.366 [2024-11-18 20:19:33.123717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:21.366 [2024-11-18 20:19:33.123732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:21.366 ===================================================== 00:18:21.366 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:21.366 ===================================================== 00:18:21.366 Controller Capabilities/Features 00:18:21.366 ================================ 00:18:21.366 Vendor ID: 4e58 00:18:21.366 Subsystem Vendor ID: 4e58 00:18:21.366 Serial Number: SPDK1 00:18:21.366 Model Number: SPDK bdev Controller 00:18:21.366 Firmware Version: 25.01 00:18:21.366 Recommended Arb Burst: 6 00:18:21.366 IEEE OUI Identifier: 8d 6b 50 00:18:21.366 Multi-path I/O 00:18:21.366 May have multiple subsystem ports: Yes 00:18:21.366 May have multiple controllers: Yes 00:18:21.366 Associated with SR-IOV VF: No 00:18:21.366 Max Data Transfer Size: 131072 00:18:21.366 Max Number of Namespaces: 32 00:18:21.366 Max Number of I/O Queues: 127 00:18:21.366 NVMe Specification Version (VS): 1.3 00:18:21.366 NVMe Specification Version (Identify): 1.3 00:18:21.366 Maximum Queue Entries: 256 00:18:21.366 Contiguous Queues Required: Yes 00:18:21.366 Arbitration Mechanisms Supported 00:18:21.366 Weighted Round Robin: Not Supported 00:18:21.366 Vendor Specific: Not Supported 00:18:21.366 Reset Timeout: 15000 ms 00:18:21.366 Doorbell Stride: 4 bytes 00:18:21.366 NVM Subsystem Reset: Not Supported 00:18:21.366 Command Sets Supported 00:18:21.366 NVM Command Set: Supported 00:18:21.366 Boot Partition: Not Supported 00:18:21.366 Memory Page Size Minimum: 4096 bytes 00:18:21.366 
Memory Page Size Maximum: 4096 bytes 00:18:21.366 Persistent Memory Region: Not Supported 00:18:21.366 Optional Asynchronous Events Supported 00:18:21.366 Namespace Attribute Notices: Supported 00:18:21.366 Firmware Activation Notices: Not Supported 00:18:21.366 ANA Change Notices: Not Supported 00:18:21.366 PLE Aggregate Log Change Notices: Not Supported 00:18:21.366 LBA Status Info Alert Notices: Not Supported 00:18:21.366 EGE Aggregate Log Change Notices: Not Supported 00:18:21.366 Normal NVM Subsystem Shutdown event: Not Supported 00:18:21.366 Zone Descriptor Change Notices: Not Supported 00:18:21.366 Discovery Log Change Notices: Not Supported 00:18:21.366 Controller Attributes 00:18:21.366 128-bit Host Identifier: Supported 00:18:21.366 Non-Operational Permissive Mode: Not Supported 00:18:21.366 NVM Sets: Not Supported 00:18:21.366 Read Recovery Levels: Not Supported 00:18:21.366 Endurance Groups: Not Supported 00:18:21.366 Predictable Latency Mode: Not Supported 00:18:21.366 Traffic Based Keep ALive: Not Supported 00:18:21.366 Namespace Granularity: Not Supported 00:18:21.366 SQ Associations: Not Supported 00:18:21.366 UUID List: Not Supported 00:18:21.366 Multi-Domain Subsystem: Not Supported 00:18:21.366 Fixed Capacity Management: Not Supported 00:18:21.366 Variable Capacity Management: Not Supported 00:18:21.366 Delete Endurance Group: Not Supported 00:18:21.366 Delete NVM Set: Not Supported 00:18:21.366 Extended LBA Formats Supported: Not Supported 00:18:21.366 Flexible Data Placement Supported: Not Supported 00:18:21.366 00:18:21.366 Controller Memory Buffer Support 00:18:21.366 ================================ 00:18:21.366 Supported: No 00:18:21.366 00:18:21.366 Persistent Memory Region Support 00:18:21.366 ================================ 00:18:21.366 Supported: No 00:18:21.366 00:18:21.366 Admin Command Set Attributes 00:18:21.366 ============================ 00:18:21.366 Security Send/Receive: Not Supported 00:18:21.366 Format NVM: Not Supported 
00:18:21.366 Firmware Activate/Download: Not Supported 00:18:21.366 Namespace Management: Not Supported 00:18:21.366 Device Self-Test: Not Supported 00:18:21.366 Directives: Not Supported 00:18:21.366 NVMe-MI: Not Supported 00:18:21.366 Virtualization Management: Not Supported 00:18:21.366 Doorbell Buffer Config: Not Supported 00:18:21.366 Get LBA Status Capability: Not Supported 00:18:21.366 Command & Feature Lockdown Capability: Not Supported 00:18:21.366 Abort Command Limit: 4 00:18:21.366 Async Event Request Limit: 4 00:18:21.366 Number of Firmware Slots: N/A 00:18:21.366 Firmware Slot 1 Read-Only: N/A 00:18:21.366 Firmware Activation Without Reset: N/A 00:18:21.366 Multiple Update Detection Support: N/A 00:18:21.366 Firmware Update Granularity: No Information Provided 00:18:21.366 Per-Namespace SMART Log: No 00:18:21.366 Asymmetric Namespace Access Log Page: Not Supported 00:18:21.366 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:21.366 Command Effects Log Page: Supported 00:18:21.366 Get Log Page Extended Data: Supported 00:18:21.366 Telemetry Log Pages: Not Supported 00:18:21.366 Persistent Event Log Pages: Not Supported 00:18:21.366 Supported Log Pages Log Page: May Support 00:18:21.366 Commands Supported & Effects Log Page: Not Supported 00:18:21.366 Feature Identifiers & Effects Log Page:May Support 00:18:21.366 NVMe-MI Commands & Effects Log Page: May Support 00:18:21.366 Data Area 4 for Telemetry Log: Not Supported 00:18:21.366 Error Log Page Entries Supported: 128 00:18:21.366 Keep Alive: Supported 00:18:21.366 Keep Alive Granularity: 10000 ms 00:18:21.366 00:18:21.366 NVM Command Set Attributes 00:18:21.366 ========================== 00:18:21.367 Submission Queue Entry Size 00:18:21.367 Max: 64 00:18:21.367 Min: 64 00:18:21.367 Completion Queue Entry Size 00:18:21.367 Max: 16 00:18:21.367 Min: 16 00:18:21.367 Number of Namespaces: 32 00:18:21.367 Compare Command: Supported 00:18:21.367 Write Uncorrectable Command: Not Supported 00:18:21.367 Dataset 
Management Command: Supported 00:18:21.367 Write Zeroes Command: Supported 00:18:21.367 Set Features Save Field: Not Supported 00:18:21.367 Reservations: Not Supported 00:18:21.367 Timestamp: Not Supported 00:18:21.367 Copy: Supported 00:18:21.367 Volatile Write Cache: Present 00:18:21.367 Atomic Write Unit (Normal): 1 00:18:21.367 Atomic Write Unit (PFail): 1 00:18:21.367 Atomic Compare & Write Unit: 1 00:18:21.367 Fused Compare & Write: Supported 00:18:21.367 Scatter-Gather List 00:18:21.367 SGL Command Set: Supported (Dword aligned) 00:18:21.367 SGL Keyed: Not Supported 00:18:21.367 SGL Bit Bucket Descriptor: Not Supported 00:18:21.367 SGL Metadata Pointer: Not Supported 00:18:21.367 Oversized SGL: Not Supported 00:18:21.367 SGL Metadata Address: Not Supported 00:18:21.367 SGL Offset: Not Supported 00:18:21.367 Transport SGL Data Block: Not Supported 00:18:21.367 Replay Protected Memory Block: Not Supported 00:18:21.367 00:18:21.367 Firmware Slot Information 00:18:21.367 ========================= 00:18:21.367 Active slot: 1 00:18:21.367 Slot 1 Firmware Revision: 25.01 00:18:21.367 00:18:21.367 00:18:21.367 Commands Supported and Effects 00:18:21.367 ============================== 00:18:21.367 Admin Commands 00:18:21.367 -------------- 00:18:21.367 Get Log Page (02h): Supported 00:18:21.367 Identify (06h): Supported 00:18:21.367 Abort (08h): Supported 00:18:21.367 Set Features (09h): Supported 00:18:21.367 Get Features (0Ah): Supported 00:18:21.367 Asynchronous Event Request (0Ch): Supported 00:18:21.367 Keep Alive (18h): Supported 00:18:21.367 I/O Commands 00:18:21.367 ------------ 00:18:21.367 Flush (00h): Supported LBA-Change 00:18:21.367 Write (01h): Supported LBA-Change 00:18:21.367 Read (02h): Supported 00:18:21.367 Compare (05h): Supported 00:18:21.367 Write Zeroes (08h): Supported LBA-Change 00:18:21.367 Dataset Management (09h): Supported LBA-Change 00:18:21.367 Copy (19h): Supported LBA-Change 00:18:21.367 00:18:21.367 Error Log 00:18:21.367 ========= 
[2024-11-18 20:19:33.123861] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 [2024-11-18 20:19:33.123879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 [2024-11-18 20:19:33.123940] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD [2024-11-18 20:19:33.123959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-11-18 20:19:33.123970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-11-18 20:19:33.123980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-11-18 20:19:33.124005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-11-18 20:19:33.124340] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 [2024-11-18 20:19:33.124360] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 [2024-11-18 20:19:33.125339] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller [2024-11-18 20:19:33.125432] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us [2024-11-18 20:19:33.125446] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms [2024-11-18 20:19:33.126348] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 [2024-11-18 20:19:33.126370] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds [2024-11-18 20:19:33.126424] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl [2024-11-18 20:19:33.129648] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:18:21.367 00:18:21.367 Arbitration 00:18:21.367 =========== 00:18:21.367 Arbitration Burst: 1 00:18:21.367 00:18:21.367 Power Management 00:18:21.367 ================ 00:18:21.367 Number of Power States: 1 00:18:21.367 Current Power State: Power State #0 00:18:21.367 Power State #0: 00:18:21.367 Max Power: 0.00 W 00:18:21.367 Non-Operational State: Operational 00:18:21.367 Entry Latency: Not Reported 00:18:21.367 Exit Latency: Not Reported 00:18:21.367 Relative Read Throughput: 0 00:18:21.367 Relative Read Latency: 0 00:18:21.367 Relative Write Throughput: 0 00:18:21.367 Relative Write Latency: 0 00:18:21.367 Idle Power: Not Reported 00:18:21.367 Active Power: Not Reported 00:18:21.367 Non-Operational Permissive Mode: Not Supported 00:18:21.367 00:18:21.367 Health Information 00:18:21.367 ================== 00:18:21.367 Critical Warnings: 00:18:21.367 Available Spare Space: OK 00:18:21.367 Temperature: OK 00:18:21.367 Device Reliability: OK 00:18:21.367 Read Only: No 00:18:21.367 Volatile Memory Backup: OK 00:18:21.367 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:21.367 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:21.367 Available Spare: 0% 00:18:21.367 Available Spare Threshold: 0% 00:18:21.367 Life Percentage Used: 0% 00:18:21.367 Data Units Read: 0 00:18:21.367 Data
Units Written: 0 00:18:21.367 Host Read Commands: 0 00:18:21.367 Host Write Commands: 0 00:18:21.367 Controller Busy Time: 0 minutes 00:18:21.367 Power Cycles: 0 00:18:21.367 Power On Hours: 0 hours 00:18:21.367 Unsafe Shutdowns: 0 00:18:21.367 Unrecoverable Media Errors: 0 00:18:21.367 Lifetime Error Log Entries: 0 00:18:21.367 Warning Temperature Time: 0 minutes 00:18:21.367 Critical Temperature Time: 0 minutes 00:18:21.367 00:18:21.367 Number of Queues 00:18:21.367 ================ 00:18:21.367 Number of I/O Submission Queues: 127 00:18:21.367 Number of I/O Completion Queues: 127 00:18:21.367 00:18:21.367 Active Namespaces 00:18:21.367 ================= 00:18:21.367 Namespace ID:1 00:18:21.367 Error Recovery Timeout: Unlimited 00:18:21.367 Command Set Identifier: NVM (00h) 00:18:21.367 Deallocate: Supported 00:18:21.367 Deallocated/Unwritten Error: Not Supported 00:18:21.367 Deallocated Read Value: Unknown 00:18:21.367 Deallocate in Write Zeroes: Not Supported 00:18:21.367 Deallocated Guard Field: 0xFFFF 00:18:21.367 Flush: Supported 00:18:21.367 Reservation: Supported 00:18:21.367 Namespace Sharing Capabilities: Multiple Controllers 00:18:21.367 Size (in LBAs): 131072 (0GiB) 00:18:21.367 Capacity (in LBAs): 131072 (0GiB) 00:18:21.368 Utilization (in LBAs): 131072 (0GiB) 00:18:21.368 NGUID: CAFC6A1E944F47F3992B78A6B9E2E087 00:18:21.368 UUID: cafc6a1e-944f-47f3-992b-78a6b9e2e087 00:18:21.368 Thin Provisioning: Not Supported 00:18:21.368 Per-NS Atomic Units: Yes 00:18:21.368 Atomic Boundary Size (Normal): 0 00:18:21.368 Atomic Boundary Size (PFail): 0 00:18:21.368 Atomic Boundary Offset: 0 00:18:21.368 Maximum Single Source Range Length: 65535 00:18:21.368 Maximum Copy Length: 65535 00:18:21.368 Maximum Source Range Count: 1 00:18:21.368 NGUID/EUI64 Never Reused: No 00:18:21.368 Namespace Write Protected: No 00:18:21.368 Number of LBA Formats: 1 00:18:21.368 Current LBA Format: LBA Format #00 00:18:21.368 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:18:21.368 00:18:21.368 20:19:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:21.627 [2024-11-18 20:19:33.380546] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:26.906 Initializing NVMe Controllers 00:18:26.906 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:26.906 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:26.906 Initialization complete. Launching workers. 00:18:26.906 ======================================================== 00:18:26.906 Latency(us) 00:18:26.906 Device Information : IOPS MiB/s Average min max 00:18:26.906 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34363.59 134.23 3724.61 1161.35 8122.88 00:18:26.906 ======================================================== 00:18:26.906 Total : 34363.59 134.23 3724.61 1161.35 8122.88 00:18:26.906 00:18:26.906 [2024-11-18 20:19:38.402756] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:26.906 20:19:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:26.906 [2024-11-18 20:19:38.646893] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:32.188 Initializing NVMe Controllers 00:18:32.188 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:18:32.188 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:32.188 Initialization complete. Launching workers. 00:18:32.188 ======================================================== 00:18:32.188 Latency(us) 00:18:32.188 Device Information : IOPS MiB/s Average min max 00:18:32.188 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16025.60 62.60 7997.21 6985.50 14983.09 00:18:32.188 ======================================================== 00:18:32.188 Total : 16025.60 62.60 7997.21 6985.50 14983.09 00:18:32.188 00:18:32.188 [2024-11-18 20:19:43.685523] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:32.188 20:19:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:32.188 [2024-11-18 20:19:43.921715] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:37.463 [2024-11-18 20:19:48.995976] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:37.463 Initializing NVMe Controllers 00:18:37.463 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:37.463 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:37.463 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:37.463 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:37.463 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:37.463 Initialization complete. Launching workers. 
00:18:37.463 Starting thread on core 2 00:18:37.463 Starting thread on core 3 00:18:37.463 Starting thread on core 1 00:18:37.463 20:19:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:37.463 [2024-11-18 20:19:49.335209] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:40.762 [2024-11-18 20:19:52.397238] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:40.762 Initializing NVMe Controllers 00:18:40.762 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:40.762 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:40.762 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:40.762 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:40.762 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:40.762 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:40.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:40.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:40.762 Initialization complete. Launching workers. 
00:18:40.762 Starting thread on core 1 with urgent priority queue 00:18:40.762 Starting thread on core 2 with urgent priority queue 00:18:40.762 Starting thread on core 3 with urgent priority queue 00:18:40.762 Starting thread on core 0 with urgent priority queue 00:18:40.762 SPDK bdev Controller (SPDK1 ) core 0: 5018.00 IO/s 19.93 secs/100000 ios 00:18:40.762 SPDK bdev Controller (SPDK1 ) core 1: 4250.33 IO/s 23.53 secs/100000 ios 00:18:40.762 SPDK bdev Controller (SPDK1 ) core 2: 4893.33 IO/s 20.44 secs/100000 ios 00:18:40.762 SPDK bdev Controller (SPDK1 ) core 3: 4966.67 IO/s 20.13 secs/100000 ios 00:18:40.762 ======================================================== 00:18:40.762 00:18:40.762 20:19:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:40.762 [2024-11-18 20:19:52.706147] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:40.762 Initializing NVMe Controllers 00:18:40.762 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:40.762 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:40.762 Namespace ID: 1 size: 0GB 00:18:40.762 Initialization complete. 00:18:40.762 INFO: using host memory buffer for IO 00:18:40.762 Hello world! 
00:18:40.762 [2024-11-18 20:19:52.740714] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:41.023 20:19:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:41.284 [2024-11-18 20:19:53.040114] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:42.222 Initializing NVMe Controllers 00:18:42.222 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:42.222 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:42.222 Initialization complete. Launching workers. 00:18:42.222 submit (in ns) avg, min, max = 6446.2, 3513.3, 4016171.1 00:18:42.222 complete (in ns) avg, min, max = 26775.2, 2074.4, 4015834.4 00:18:42.222 00:18:42.222 Submit histogram 00:18:42.222 ================ 00:18:42.222 Range in us Cumulative Count 00:18:42.222 3.508 - 3.532: 0.2793% ( 36) 00:18:42.222 3.532 - 3.556: 0.8845% ( 78) 00:18:42.222 3.556 - 3.579: 3.7784% ( 373) 00:18:42.222 3.579 - 3.603: 8.0767% ( 554) 00:18:42.222 3.603 - 3.627: 16.1300% ( 1038) 00:18:42.222 3.627 - 3.650: 25.4015% ( 1195) 00:18:42.222 3.650 - 3.674: 33.7807% ( 1080) 00:18:42.222 3.674 - 3.698: 41.1902% ( 955) 00:18:42.222 3.698 - 3.721: 48.7470% ( 974) 00:18:42.222 3.721 - 3.745: 53.8521% ( 658) 00:18:42.222 3.745 - 3.769: 58.3055% ( 574) 00:18:42.222 3.769 - 3.793: 62.0219% ( 479) 00:18:42.222 3.793 - 3.816: 65.3115% ( 424) 00:18:42.222 3.816 - 3.840: 69.0589% ( 483) 00:18:42.222 3.840 - 3.864: 73.4192% ( 562) 00:18:42.222 3.864 - 3.887: 77.9192% ( 580) 00:18:42.222 3.887 - 3.911: 81.5191% ( 464) 00:18:42.222 3.911 - 3.935: 84.7855% ( 421) 00:18:42.222 3.935 - 3.959: 86.9268% ( 276) 00:18:42.222 3.959 - 3.982: 88.7966% ( 241) 00:18:42.222 3.982 - 4.006: 90.3328% ( 
198) 00:18:42.222 4.006 - 4.030: 91.4268% ( 141) 00:18:42.222 4.030 - 4.053: 92.4820% ( 136) 00:18:42.222 4.053 - 4.077: 93.4518% ( 125) 00:18:42.222 4.077 - 4.101: 94.2354% ( 101) 00:18:42.222 4.101 - 4.124: 94.8406% ( 78) 00:18:42.222 4.124 - 4.148: 95.3449% ( 65) 00:18:42.222 4.148 - 4.172: 95.7018% ( 46) 00:18:42.222 4.172 - 4.196: 95.9966% ( 38) 00:18:42.222 4.196 - 4.219: 96.2138% ( 28) 00:18:42.222 4.219 - 4.243: 96.4311% ( 28) 00:18:42.222 4.243 - 4.267: 96.5707% ( 18) 00:18:42.222 4.267 - 4.290: 96.6793% ( 14) 00:18:42.222 4.290 - 4.314: 96.7647% ( 11) 00:18:42.222 4.314 - 4.338: 96.8423% ( 10) 00:18:42.222 4.338 - 4.361: 96.9043% ( 8) 00:18:42.222 4.361 - 4.385: 96.9509% ( 6) 00:18:42.222 4.385 - 4.409: 97.0285% ( 10) 00:18:42.222 4.409 - 4.433: 97.0983% ( 9) 00:18:42.222 4.433 - 4.456: 97.1604% ( 8) 00:18:42.222 4.456 - 4.480: 97.1759% ( 2) 00:18:42.222 4.480 - 4.504: 97.1992% ( 3) 00:18:42.222 4.504 - 4.527: 97.2147% ( 2) 00:18:42.222 4.527 - 4.551: 97.2224% ( 1) 00:18:42.222 4.551 - 4.575: 97.2535% ( 4) 00:18:42.222 4.575 - 4.599: 97.2690% ( 2) 00:18:42.222 4.599 - 4.622: 97.2767% ( 1) 00:18:42.222 4.622 - 4.646: 97.3078% ( 4) 00:18:42.222 4.646 - 4.670: 97.3621% ( 7) 00:18:42.222 4.670 - 4.693: 97.4009% ( 5) 00:18:42.222 4.693 - 4.717: 97.4552% ( 7) 00:18:42.222 4.717 - 4.741: 97.5716% ( 15) 00:18:42.222 4.741 - 4.764: 97.6492% ( 10) 00:18:42.222 4.764 - 4.788: 97.6880% ( 5) 00:18:42.222 4.788 - 4.812: 97.7190% ( 4) 00:18:42.222 4.812 - 4.836: 97.7733% ( 7) 00:18:42.222 4.836 - 4.859: 97.8509% ( 10) 00:18:42.222 4.859 - 4.883: 97.8819% ( 4) 00:18:42.222 4.883 - 4.907: 97.9207% ( 5) 00:18:42.222 4.907 - 4.930: 97.9362% ( 2) 00:18:42.222 4.930 - 4.954: 97.9750% ( 5) 00:18:42.222 4.954 - 4.978: 97.9828% ( 1) 00:18:42.222 4.978 - 5.001: 98.0371% ( 7) 00:18:42.222 5.001 - 5.025: 98.0681% ( 4) 00:18:42.222 5.025 - 5.049: 98.0836% ( 2) 00:18:42.222 5.049 - 5.073: 98.1069% ( 3) 00:18:42.222 5.073 - 5.096: 98.1147% ( 1) 00:18:42.222 5.096 - 5.120: 98.1224% ( 
1) 00:18:42.222 5.120 - 5.144: 98.1379% ( 2) 00:18:42.222 5.167 - 5.191: 98.1535% ( 2) 00:18:42.222 5.191 - 5.215: 98.1690% ( 2) 00:18:42.222 5.215 - 5.239: 98.1767% ( 1) 00:18:42.222 5.239 - 5.262: 98.1845% ( 1) 00:18:42.222 5.286 - 5.310: 98.1923% ( 1) 00:18:42.222 5.310 - 5.333: 98.2000% ( 1) 00:18:42.222 5.476 - 5.499: 98.2078% ( 1) 00:18:42.222 5.547 - 5.570: 98.2155% ( 1) 00:18:42.222 5.784 - 5.807: 98.2233% ( 1) 00:18:42.222 5.926 - 5.950: 98.2310% ( 1) 00:18:42.222 6.044 - 6.068: 98.2388% ( 1) 00:18:42.222 6.116 - 6.163: 98.2466% ( 1) 00:18:42.222 6.163 - 6.210: 98.2543% ( 1) 00:18:42.222 6.779 - 6.827: 98.2621% ( 1) 00:18:42.222 6.827 - 6.874: 98.2698% ( 1) 00:18:42.222 6.874 - 6.921: 98.2854% ( 2) 00:18:42.222 6.921 - 6.969: 98.2931% ( 1) 00:18:42.222 6.969 - 7.016: 98.3009% ( 1) 00:18:42.222 7.159 - 7.206: 98.3164% ( 2) 00:18:42.222 7.301 - 7.348: 98.3242% ( 1) 00:18:42.222 7.348 - 7.396: 98.3397% ( 2) 00:18:42.222 7.396 - 7.443: 98.3474% ( 1) 00:18:42.222 7.538 - 7.585: 98.3552% ( 1) 00:18:42.222 7.633 - 7.680: 98.3629% ( 1) 00:18:42.222 7.680 - 7.727: 98.3707% ( 1) 00:18:42.222 7.727 - 7.775: 98.3785% ( 1) 00:18:42.222 7.775 - 7.822: 98.3862% ( 1) 00:18:42.222 7.917 - 7.964: 98.4017% ( 2) 00:18:42.222 7.964 - 8.012: 98.4095% ( 1) 00:18:42.222 8.012 - 8.059: 98.4250% ( 2) 00:18:42.222 8.107 - 8.154: 98.4405% ( 2) 00:18:42.222 8.154 - 8.201: 98.4483% ( 1) 00:18:42.222 8.201 - 8.249: 98.4638% ( 2) 00:18:42.222 8.249 - 8.296: 98.4871% ( 3) 00:18:42.222 8.296 - 8.344: 98.4948% ( 1) 00:18:42.222 8.344 - 8.391: 98.5104% ( 2) 00:18:42.222 8.391 - 8.439: 98.5181% ( 1) 00:18:42.222 8.439 - 8.486: 98.5414% ( 3) 00:18:42.222 8.581 - 8.628: 98.5492% ( 1) 00:18:42.222 8.676 - 8.723: 98.5647% ( 2) 00:18:42.222 8.723 - 8.770: 98.5802% ( 2) 00:18:42.222 8.865 - 8.913: 98.5879% ( 1) 00:18:42.222 8.913 - 8.960: 98.5957% ( 1) 00:18:42.222 9.055 - 9.102: 98.6035% ( 1) 00:18:42.222 9.197 - 9.244: 98.6267% ( 3) 00:18:42.222 9.244 - 9.292: 98.6500% ( 3) 00:18:42.222 9.292 - 
9.339: 98.6578% ( 1) 00:18:42.222 9.529 - 9.576: 98.6655% ( 1) 00:18:42.222 9.576 - 9.624: 98.6733% ( 1) 00:18:42.222 9.671 - 9.719: 98.6810% ( 1) 00:18:42.222 9.766 - 9.813: 98.6888% ( 1) 00:18:42.222 10.098 - 10.145: 98.7043% ( 2) 00:18:42.222 10.145 - 10.193: 98.7121% ( 1) 00:18:42.222 10.193 - 10.240: 98.7198% ( 1) 00:18:42.222 10.287 - 10.335: 98.7276% ( 1) 00:18:42.222 10.382 - 10.430: 98.7354% ( 1) 00:18:42.222 10.430 - 10.477: 98.7431% ( 1) 00:18:42.222 10.524 - 10.572: 98.7509% ( 1) 00:18:42.222 10.667 - 10.714: 98.7586% ( 1) 00:18:42.222 10.714 - 10.761: 98.7664% ( 1) 00:18:42.222 10.856 - 10.904: 98.7741% ( 1) 00:18:42.222 10.904 - 10.951: 98.7819% ( 1) 00:18:42.222 11.804 - 11.852: 98.7897% ( 1) 00:18:42.222 12.326 - 12.421: 98.7974% ( 1) 00:18:42.222 12.421 - 12.516: 98.8052% ( 1) 00:18:42.222 12.516 - 12.610: 98.8207% ( 2) 00:18:42.222 13.179 - 13.274: 98.8285% ( 1) 00:18:42.222 13.274 - 13.369: 98.8362% ( 1) 00:18:42.222 13.369 - 13.464: 98.8440% ( 1) 00:18:42.222 13.464 - 13.559: 98.8517% ( 1) 00:18:42.222 13.748 - 13.843: 98.8595% ( 1) 00:18:42.222 13.843 - 13.938: 98.8673% ( 1) 00:18:42.222 13.938 - 14.033: 98.8750% ( 1) 00:18:42.222 14.317 - 14.412: 98.8828% ( 1) 00:18:42.222 14.412 - 14.507: 98.8905% ( 1) 00:18:42.222 14.601 - 14.696: 98.9060% ( 2) 00:18:42.222 14.886 - 14.981: 98.9138% ( 1) 00:18:42.222 15.360 - 15.455: 98.9216% ( 1) 00:18:42.222 17.161 - 17.256: 98.9371% ( 2) 00:18:42.222 17.256 - 17.351: 98.9448% ( 1) 00:18:42.222 17.351 - 17.446: 98.9681% ( 3) 00:18:42.223 17.446 - 17.541: 99.0147% ( 6) 00:18:42.223 17.541 - 17.636: 99.0379% ( 3) 00:18:42.223 17.636 - 17.730: 99.0922% ( 7) 00:18:42.223 17.730 - 17.825: 99.1388% ( 6) 00:18:42.223 17.825 - 17.920: 99.2009% ( 8) 00:18:42.223 17.920 - 18.015: 99.2707% ( 9) 00:18:42.223 18.015 - 18.110: 99.3250% ( 7) 00:18:42.223 18.110 - 18.204: 99.3948% ( 9) 00:18:42.223 18.204 - 18.299: 99.4259% ( 4) 00:18:42.223 18.299 - 18.394: 99.4724% ( 6) 00:18:42.223 18.394 - 18.489: 99.5888% ( 15) 
00:18:42.223 18.489 - 18.584: 99.6664% ( 10) 00:18:42.223 18.584 - 18.679: 99.7285% ( 8) 00:18:42.223 18.679 - 18.773: 99.7440% ( 2) 00:18:42.223 18.773 - 18.868: 99.7828% ( 5) 00:18:42.223 18.868 - 18.963: 99.7983% ( 2) 00:18:42.223 18.963 - 19.058: 99.8138% ( 2) 00:18:42.223 19.058 - 19.153: 99.8371% ( 3) 00:18:42.223 19.153 - 19.247: 99.8603% ( 3) 00:18:42.223 19.247 - 19.342: 99.8681% ( 1) 00:18:42.223 19.342 - 19.437: 99.8836% ( 2) 00:18:42.223 19.911 - 20.006: 99.8914% ( 1) 00:18:42.223 20.764 - 20.859: 99.8991% ( 1) 00:18:42.223 21.523 - 21.618: 99.9069% ( 1) 00:18:42.223 22.471 - 22.566: 99.9147% ( 1) 00:18:42.223 22.661 - 22.756: 99.9224% ( 1) 00:18:42.223 22.756 - 22.850: 99.9302% ( 1) 00:18:42.223 24.652 - 24.841: 99.9379% ( 1) 00:18:42.223 3980.705 - 4004.978: 99.9922% ( 7) 00:18:42.223 4004.978 - 4029.250: 100.0000% ( 1) 00:18:42.223 00:18:42.223 Complete histogram 00:18:42.223 ================== 00:18:42.223 Range in us Cumulative Count 00:18:42.223 2.074 - 2.086: 2.3431% ( 302) 00:18:42.223 2.086 - 2.098: 32.0196% ( 3825) 00:18:42.223 2.098 - 2.110: 44.0143% ( 1546) 00:18:42.223 2.110 - 2.121: 48.7625% ( 612) 00:18:42.223 2.121 - 2.133: 58.1116% ( 1205) 00:18:42.223 2.133 - 2.145: 60.4081% ( 296) 00:18:42.223 2.145 - 2.157: 65.2106% ( 619) 00:18:42.223 2.157 - 2.169: 75.4907% ( 1325) 00:18:42.223 2.169 - 2.181: 76.8950% ( 181) 00:18:42.223 2.181 - 2.193: 78.9278% ( 262) 00:18:42.223 2.193 - 2.204: 81.5114% ( 333) 00:18:42.223 2.204 - 2.216: 82.0467% ( 69) 00:18:42.223 2.216 - 2.228: 83.9398% ( 244) 00:18:42.223 2.228 - 2.240: 88.1760% ( 546) 00:18:42.223 2.240 - 2.252: 90.6199% ( 315) 00:18:42.223 2.252 - 2.264: 92.1639% ( 199) 00:18:42.223 2.264 - 2.276: 93.2656% ( 142) 00:18:42.223 2.276 - 2.287: 93.7001% ( 56) 00:18:42.223 2.287 - 2.299: 93.9871% ( 37) 00:18:42.223 2.299 - 2.311: 94.2975% ( 40) 00:18:42.223 2.311 - 2.323: 94.9414% ( 83) 00:18:42.223 2.323 - 2.335: 95.2828% ( 44) 00:18:42.223 2.335 - 2.347: 95.3914% ( 14) 00:18:42.223 2.347 - 
2.359: 95.4535% ( 8) 00:18:42.223 2.359 - 2.370: 95.5000% ( 6) 00:18:42.223 2.370 - 2.382: 95.5156% ( 2) 00:18:42.223 2.382 - 2.394: 95.5776% ( 8) 00:18:42.223 2.394 - 2.406: 95.9190% ( 44) 00:18:42.223 2.406 - 2.418: 96.1130% ( 25) 00:18:42.223 2.418 - 2.430: 96.4233% ( 40) 00:18:42.223 2.430 - 2.441: 96.6638% ( 31) 00:18:42.223 2.441 - 2.453: 96.8733% ( 27) 00:18:42.223 2.453 - 2.465: 97.0440% ( 22) 00:18:42.223 2.465 - 2.477: 97.1604% ( 15) 00:18:42.223 2.477 - 2.489: 97.3000% ( 18) 00:18:42.223 2.489 - 2.501: 97.4862% ( 24) 00:18:42.223 2.501 - 2.513: 97.6569% ( 22) 00:18:42.223 2.513 - 2.524: 97.7811% ( 16) 00:18:42.223 2.524 - 2.536: 97.8509% ( 9) 00:18:42.223 2.536 - 2.548: 97.9129% ( 8) 00:18:42.223 2.548 - 2.560: 97.9673% ( 7) 00:18:42.223 2.560 - 2.572: 97.9750% ( 1) 00:18:42.223 2.572 - 2.584: 97.9983% ( 3) 00:18:42.223 2.584 - 2.596: 98.0371% ( 5) 00:18:42.223 2.596 - 2.607: 98.0604% ( 3) 00:18:42.223 2.607 - 2.619: 98.0759% ( 2) 00:18:42.223 2.619 - 2.631: 98.0836% ( 1) 00:18:42.223 2.643 - 2.655: 98.0914% ( 1) 00:18:42.223 2.655 - 2.667: 98.0992% ( 1) 00:18:42.223 2.667 - 2.679: 98.1224% ( 3) 00:18:42.223 2.679 - 2.690: 98.1302% ( 1) 00:18:42.223 2.702 - 2.714: 98.1612% ( 4) 00:18:42.223 2.714 - 2.726: 98.1690% ( 1) 00:18:42.223 2.726 - 2.738: 98.1767% ( 1) 00:18:42.223 2.738 - 2.750: 98.1845% ( 1) 00:18:42.223 2.750 - 2.761: 98.1923% ( 1) 00:18:42.223 2.761 - 2.773: 98.2078% ( 2) 00:18:42.223 2.773 - 2.785: 98.2233% ( 2) 00:18:42.223 2.821 - 2.833: 98.2388% ( 2) 00:18:42.223 2.833 - 2.844: 98.2466% ( 1) 00:18:42.223 2.844 - 2.856: 98.2621% ( 2) 00:18:42.223 2.856 - 2.868: 98.2698% ( 1) 00:18:42.223 2.868 - 2.880: 98.2854% ( 2) 00:18:42.223 2.939 - 2.951: 98.2931% ( 1) 00:18:42.223 3.034 - 3.058: 98.3009% ( 1) 00:18:42.223 3.176 - 3.200: 98.3086% ( 1) 00:18:42.223 3.224 - 3.247: 98.3242% ( 2) 00:18:42.223 3.295 - 3.319: 98.3397% ( 2) 00:18:42.223 3.319 - 3.342: 98.3552% ( 2) 00:18:42.223 3.342 - 3.366: 98.3707% ( 2) 00:18:42.223 3.366 - 3.390: 
98.3862% ( 2) 00:18:42.223 3.390 - 3.413: 98.4017% ( 2) 00:18:42.223 3.413 - 3.437: 98.4173% ( 2) 00:18:42.223 3.437 - 3.461: 98.4483% ( 4) 00:18:42.223 3.532 - 3.556: 98.4560% ( 1) 00:18:42.223 3.556 - 3.579: 98.4871% ( 4) 00:18:42.223 3.579 - 3.603: 98.4948% ( 1) 00:18:42.223 3.674 - 3.698: 98.5026% ( 1) 00:18:42.223 3.698 - 3.721: 98.5104% ( 1) 00:18:42.223 3.769 - 3.793: 98.5181% ( 1) 00:18:42.223 3.793 - 3.816: 98.5259% ( 1) 00:18:42.223 3.816 - 3.840: 98.5336% ( 1) 00:18:42.223 3.864 - 3.887: 98.5414% ( 1) 00:18:42.223 3.887 - 3.911: 98.5492% ( 1) 00:18:42.223 4.006 - 4.030: 98.5569% ( 1) 00:18:42.223 4.077 - 4.101: 98.5647% ( 1) 00:18:42.223 4.148 - 4.172: 98.5724% ( 1) 00:18:42.223 5.404 - 5.428: 98.5802% ( 1) 00:18:42.223 5.594 - 5.618: 98.5879% ( 1) 00:18:42.223 5.665 - 5.689: 98.5957% ( 1) 00:18:42.223 5.689 - 5.713: 98.6035% ( 1) 00:18:42.223 5.760 - 5.784: 98.6112% ( 1) 00:18:42.223 6.044 - 6.068: 98.6190% ( 1) 00:18:42.223 6.163 - 6.210: 98.6267% ( 1) 00:18:42.223 6.305 - 6.353: 98.6345% ( 1) 00:18:42.223 6.400 - 6.447: 98.6500% ( 2) 00:18:42.223 6.637 - 6.684: 98.6578% ( 1) 00:18:42.223 6.732 - 6.779: 98.6733% ( 2) 00:18:42.223 6.779 - 6.827: 98.6810% ( 1) 00:18:42.223 6.921 - 6.969: 98.6888% ( 1) 00:18:42.223 6.969 - 7.016: 98.6966% ( 1) 00:18:42.223 7.064 - 7.111: 98.7121% ( 2) 00:18:42.223 8.107 - 8.154: 98.7198% ( 1) 00:18:42.223 9.813 - 9.861: 98.7276% ( 1) 00:18:42.223 15.360 - 15.455: 98.7354% ( 1) 00:18:42.223 15.455 - 15.550: 98.7431% ( 1) 00:18:42.223 15.550 - 15.644: 98.7509% ( 1) 00:18:42.223 15.644 - 15.739: 98.7586% ( 1) 00:18:42.223 15.739 - 15.834: 98.7664% ( 1) 00:18:42.223 15.834 - 15.929: 98.7741% ( 1) 00:18:42.223 15.929 - 16.024: 98.7819% ( 1) 00:18:42.223 16.024 - 16.119: 98.8207% ( 5) 00:18:42.223 16.119 - 16.213: 98.8517% ( 4) 00:18:42.223 16.213 - 16.308: 98.8983% ( 6) 00:18:42.223 [2024-11-18 20:19:54.062409] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
16.308 - 16.403: 98.9293% ( 4) 00:18:42.223 16.403 - 16.498: 98.9681% ( 5) 00:18:42.223 16.498 - 16.593: 99.0379% ( 9) 00:18:42.223 16.593 - 16.687: 99.0922% ( 7) 00:18:42.223 16.687 - 16.782: 99.1621% ( 9) 00:18:42.223 16.782 - 16.877: 99.2009% ( 5) 00:18:42.223 16.877 - 16.972: 99.2397% ( 5) 00:18:42.223 16.972 - 17.067: 99.2629% ( 3) 00:18:42.223 17.067 - 17.161: 99.3017% ( 5) 00:18:42.223 17.161 - 17.256: 99.3095% ( 1) 00:18:42.223 17.256 - 17.351: 99.3172% ( 1) 00:18:42.223 17.351 - 17.446: 99.3250% ( 1) 00:18:42.223 17.446 - 17.541: 99.3328% ( 1) 00:18:42.223 17.541 - 17.636: 99.3405% ( 1) 00:18:42.223 17.730 - 17.825: 99.3483% ( 1) 00:18:42.223 18.299 - 18.394: 99.3560% ( 1) 00:18:42.223 19.437 - 19.532: 99.3638% ( 1) 00:18:42.223 19.721 - 19.816: 99.3716% ( 1) 00:18:42.223 47.597 - 47.787: 99.3793% ( 1) 00:18:42.223 168.391 - 169.150: 99.3871% ( 1) 00:18:42.223 3980.705 - 4004.978: 99.9612% ( 74) 00:18:42.223 4004.978 - 4029.250: 100.0000% ( 5) 00:18:42.223 00:18:42.223 20:19:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:42.223 20:19:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:42.223 20:19:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:42.223 20:19:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:42.224 20:19:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:42.485 [ 00:18:42.485 { 00:18:42.485 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:42.485 "subtype": "Discovery", 00:18:42.485 "listen_addresses": [], 00:18:42.485 "allow_any_host": true, 00:18:42.485 "hosts": [] 
00:18:42.485 }, 00:18:42.485 { 00:18:42.485 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:42.485 "subtype": "NVMe", 00:18:42.485 "listen_addresses": [ 00:18:42.485 { 00:18:42.485 "trtype": "VFIOUSER", 00:18:42.485 "adrfam": "IPv4", 00:18:42.485 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:42.485 "trsvcid": "0" 00:18:42.485 } 00:18:42.485 ], 00:18:42.485 "allow_any_host": true, 00:18:42.485 "hosts": [], 00:18:42.485 "serial_number": "SPDK1", 00:18:42.485 "model_number": "SPDK bdev Controller", 00:18:42.485 "max_namespaces": 32, 00:18:42.485 "min_cntlid": 1, 00:18:42.485 "max_cntlid": 65519, 00:18:42.485 "namespaces": [ 00:18:42.485 { 00:18:42.485 "nsid": 1, 00:18:42.485 "bdev_name": "Malloc1", 00:18:42.485 "name": "Malloc1", 00:18:42.485 "nguid": "CAFC6A1E944F47F3992B78A6B9E2E087", 00:18:42.485 "uuid": "cafc6a1e-944f-47f3-992b-78a6b9e2e087" 00:18:42.485 } 00:18:42.485 ] 00:18:42.485 }, 00:18:42.485 { 00:18:42.485 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:42.485 "subtype": "NVMe", 00:18:42.485 "listen_addresses": [ 00:18:42.485 { 00:18:42.485 "trtype": "VFIOUSER", 00:18:42.485 "adrfam": "IPv4", 00:18:42.485 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:42.485 "trsvcid": "0" 00:18:42.485 } 00:18:42.485 ], 00:18:42.485 "allow_any_host": true, 00:18:42.485 "hosts": [], 00:18:42.485 "serial_number": "SPDK2", 00:18:42.485 "model_number": "SPDK bdev Controller", 00:18:42.485 "max_namespaces": 32, 00:18:42.485 "min_cntlid": 1, 00:18:42.485 "max_cntlid": 65519, 00:18:42.485 "namespaces": [ 00:18:42.485 { 00:18:42.485 "nsid": 1, 00:18:42.485 "bdev_name": "Malloc2", 00:18:42.485 "name": "Malloc2", 00:18:42.485 "nguid": "0FEDE46855364226898231D74FAA6FF5", 00:18:42.485 "uuid": "0fede468-5536-4226-8982-31d74faa6ff5" 00:18:42.485 } 00:18:42.485 ] 00:18:42.485 } 00:18:42.485 ] 00:18:42.485 20:19:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:42.485 20:19:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=231369 00:18:42.485 20:19:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:42.485 20:19:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:42.485 20:19:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:42.485 20:19:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:42.485 20:19:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:18:42.485 20:19:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:18:42.485 20:19:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:42.485 20:19:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:42.485 20:19:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:18:42.485 20:19:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:18:42.485 20:19:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:42.744 [2024-11-18 20:19:54.553183] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:42.744 20:19:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:42.744 20:19:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:42.744 20:19:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:42.744 20:19:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:42.744 20:19:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:43.003 Malloc3 00:18:43.003 20:19:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:43.262 [2024-11-18 20:19:55.152591] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:43.262 20:19:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:43.262 Asynchronous Event Request test 00:18:43.262 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:43.262 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:43.262 Registering asynchronous event callbacks... 00:18:43.262 Starting namespace attribute notice tests for all controllers... 00:18:43.262 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:43.262 aer_cb - Changed Namespace 00:18:43.262 Cleaning up... 
00:18:43.521 [ 00:18:43.521 { 00:18:43.521 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:43.521 "subtype": "Discovery", 00:18:43.521 "listen_addresses": [], 00:18:43.521 "allow_any_host": true, 00:18:43.521 "hosts": [] 00:18:43.521 }, 00:18:43.521 { 00:18:43.521 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:43.521 "subtype": "NVMe", 00:18:43.521 "listen_addresses": [ 00:18:43.521 { 00:18:43.521 "trtype": "VFIOUSER", 00:18:43.521 "adrfam": "IPv4", 00:18:43.521 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:43.521 "trsvcid": "0" 00:18:43.521 } 00:18:43.521 ], 00:18:43.521 "allow_any_host": true, 00:18:43.521 "hosts": [], 00:18:43.521 "serial_number": "SPDK1", 00:18:43.521 "model_number": "SPDK bdev Controller", 00:18:43.521 "max_namespaces": 32, 00:18:43.521 "min_cntlid": 1, 00:18:43.521 "max_cntlid": 65519, 00:18:43.521 "namespaces": [ 00:18:43.521 { 00:18:43.521 "nsid": 1, 00:18:43.521 "bdev_name": "Malloc1", 00:18:43.521 "name": "Malloc1", 00:18:43.521 "nguid": "CAFC6A1E944F47F3992B78A6B9E2E087", 00:18:43.521 "uuid": "cafc6a1e-944f-47f3-992b-78a6b9e2e087" 00:18:43.521 }, 00:18:43.521 { 00:18:43.521 "nsid": 2, 00:18:43.521 "bdev_name": "Malloc3", 00:18:43.521 "name": "Malloc3", 00:18:43.521 "nguid": "3A7625C705734EB69530896D15E7BF0B", 00:18:43.521 "uuid": "3a7625c7-0573-4eb6-9530-896d15e7bf0b" 00:18:43.521 } 00:18:43.521 ] 00:18:43.521 }, 00:18:43.521 { 00:18:43.521 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:43.521 "subtype": "NVMe", 00:18:43.521 "listen_addresses": [ 00:18:43.521 { 00:18:43.521 "trtype": "VFIOUSER", 00:18:43.521 "adrfam": "IPv4", 00:18:43.521 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:43.521 "trsvcid": "0" 00:18:43.521 } 00:18:43.521 ], 00:18:43.521 "allow_any_host": true, 00:18:43.521 "hosts": [], 00:18:43.521 "serial_number": "SPDK2", 00:18:43.521 "model_number": "SPDK bdev Controller", 00:18:43.521 "max_namespaces": 32, 00:18:43.521 "min_cntlid": 1, 00:18:43.521 "max_cntlid": 65519, 00:18:43.521 "namespaces": [ 
00:18:43.521 { 00:18:43.521 "nsid": 1, 00:18:43.521 "bdev_name": "Malloc2", 00:18:43.521 "name": "Malloc2", 00:18:43.521 "nguid": "0FEDE46855364226898231D74FAA6FF5", 00:18:43.521 "uuid": "0fede468-5536-4226-8982-31d74faa6ff5" 00:18:43.521 } 00:18:43.521 ] 00:18:43.521 } 00:18:43.521 ] 00:18:43.521 20:19:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 231369 00:18:43.521 20:19:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:43.521 20:19:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:43.521 20:19:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:43.521 20:19:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:43.521 [2024-11-18 20:19:55.450075] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:18:43.521 [2024-11-18 20:19:55.450111] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid231508 ] 00:18:43.521 [2024-11-18 20:19:55.499446] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:43.521 [2024-11-18 20:19:55.504933] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:43.521 [2024-11-18 20:19:55.504963] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f60fb15b000 00:18:43.521 [2024-11-18 20:19:55.505916] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:43.521 [2024-11-18 20:19:55.506927] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:43.521 [2024-11-18 20:19:55.507950] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:43.521 [2024-11-18 20:19:55.509645] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:43.521 [2024-11-18 20:19:55.509961] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:43.521 [2024-11-18 20:19:55.510969] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:43.521 [2024-11-18 20:19:55.511985] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:43.521 
[2024-11-18 20:19:55.512987] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:43.521 [2024-11-18 20:19:55.513994] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:43.521 [2024-11-18 20:19:55.514015] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f60f9e53000 00:18:43.521 [2024-11-18 20:19:55.515129] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:43.782 [2024-11-18 20:19:55.529380] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:43.782 [2024-11-18 20:19:55.529419] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:18:43.782 [2024-11-18 20:19:55.534539] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:43.782 [2024-11-18 20:19:55.534599] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:43.782 [2024-11-18 20:19:55.534709] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:18:43.782 [2024-11-18 20:19:55.534735] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:18:43.782 [2024-11-18 20:19:55.534746] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:18:43.782 [2024-11-18 20:19:55.535539] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:43.782 [2024-11-18 20:19:55.535560] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:18:43.782 [2024-11-18 20:19:55.535572] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:18:43.782 [2024-11-18 20:19:55.536544] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:43.782 [2024-11-18 20:19:55.536565] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:18:43.782 [2024-11-18 20:19:55.536579] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:43.782 [2024-11-18 20:19:55.537547] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:43.782 [2024-11-18 20:19:55.537567] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:43.782 [2024-11-18 20:19:55.538556] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:43.782 [2024-11-18 20:19:55.538575] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:43.782 [2024-11-18 20:19:55.538584] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:43.782 [2024-11-18 20:19:55.538595] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:43.782 [2024-11-18 20:19:55.538710] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:18:43.782 [2024-11-18 20:19:55.538722] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:43.782 [2024-11-18 20:19:55.538730] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:43.782 [2024-11-18 20:19:55.539560] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:43.782 [2024-11-18 20:19:55.540568] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:43.782 [2024-11-18 20:19:55.541576] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:43.782 [2024-11-18 20:19:55.542570] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:43.783 [2024-11-18 20:19:55.542656] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:43.783 [2024-11-18 20:19:55.543591] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:43.783 [2024-11-18 20:19:55.543610] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:43.783 [2024-11-18 20:19:55.543641] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:43.783 [2024-11-18 20:19:55.543668] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:18:43.783 [2024-11-18 20:19:55.543682] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:43.783 [2024-11-18 20:19:55.543706] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:43.783 [2024-11-18 20:19:55.543717] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:43.783 [2024-11-18 20:19:55.543723] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:43.783 [2024-11-18 20:19:55.543742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:43.783 [2024-11-18 20:19:55.551653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:43.783 [2024-11-18 20:19:55.551679] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:18:43.783 [2024-11-18 20:19:55.551688] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:18:43.783 [2024-11-18 20:19:55.551695] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:18:43.783 [2024-11-18 20:19:55.551703] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:43.783 [2024-11-18 20:19:55.551716] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:18:43.783 [2024-11-18 20:19:55.551725] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:18:43.783 [2024-11-18 20:19:55.551733] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:18:43.783 [2024-11-18 20:19:55.551752] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:43.783 [2024-11-18 20:19:55.551769] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:43.783 [2024-11-18 20:19:55.559660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:43.783 [2024-11-18 20:19:55.559684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:43.783 [2024-11-18 20:19:55.559697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:43.783 [2024-11-18 20:19:55.559709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:43.783 [2024-11-18 20:19:55.559720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:43.783 [2024-11-18 20:19:55.559729] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:43.783 [2024-11-18 20:19:55.559741] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:43.783 [2024-11-18 20:19:55.559755] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:43.783 [2024-11-18 20:19:55.567649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:43.783 [2024-11-18 20:19:55.567672] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:18:43.783 [2024-11-18 20:19:55.567683] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:43.783 [2024-11-18 20:19:55.567695] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:18:43.783 [2024-11-18 20:19:55.567705] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:43.783 [2024-11-18 20:19:55.567718] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:43.783 [2024-11-18 20:19:55.575646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:43.783 [2024-11-18 20:19:55.575723] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:18:43.783 [2024-11-18 20:19:55.575741] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:43.783 
[2024-11-18 20:19:55.575754] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:43.783 [2024-11-18 20:19:55.575762] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:43.783 [2024-11-18 20:19:55.575768] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:43.783 [2024-11-18 20:19:55.575778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:43.783 [2024-11-18 20:19:55.583645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:43.783 [2024-11-18 20:19:55.583667] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:18:43.783 [2024-11-18 20:19:55.583716] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:18:43.783 [2024-11-18 20:19:55.583734] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:43.783 [2024-11-18 20:19:55.583747] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:43.783 [2024-11-18 20:19:55.583756] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:43.783 [2024-11-18 20:19:55.583762] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:43.783 [2024-11-18 20:19:55.583772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:43.783 [2024-11-18 20:19:55.591650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:43.783 [2024-11-18 20:19:55.591695] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:43.783 [2024-11-18 20:19:55.591713] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:43.783 [2024-11-18 20:19:55.591727] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:43.783 [2024-11-18 20:19:55.591736] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:43.783 [2024-11-18 20:19:55.591742] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:43.783 [2024-11-18 20:19:55.591752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:43.783 [2024-11-18 20:19:55.599649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:43.783 [2024-11-18 20:19:55.599671] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:43.783 [2024-11-18 20:19:55.599684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:43.783 [2024-11-18 20:19:55.599698] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:18:43.783 [2024-11-18 20:19:55.599709] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:18:43.783 [2024-11-18 20:19:55.599718] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:43.783 [2024-11-18 20:19:55.599727] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:18:43.783 [2024-11-18 20:19:55.599736] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:43.783 [2024-11-18 20:19:55.599743] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:18:43.783 [2024-11-18 20:19:55.599751] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:18:43.783 [2024-11-18 20:19:55.599777] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:43.783 [2024-11-18 20:19:55.607648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:43.783 [2024-11-18 20:19:55.607679] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:43.783 [2024-11-18 20:19:55.615648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:43.783 [2024-11-18 20:19:55.615674] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:43.783 [2024-11-18 20:19:55.623645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:43.783 [2024-11-18 
20:19:55.623672] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:43.783 [2024-11-18 20:19:55.631659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:43.783 [2024-11-18 20:19:55.631691] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:43.783 [2024-11-18 20:19:55.631702] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:43.783 [2024-11-18 20:19:55.631709] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:43.783 [2024-11-18 20:19:55.631714] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:43.783 [2024-11-18 20:19:55.631720] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:43.784 [2024-11-18 20:19:55.631730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:43.784 [2024-11-18 20:19:55.631741] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:43.784 [2024-11-18 20:19:55.631749] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:43.784 [2024-11-18 20:19:55.631755] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:43.784 [2024-11-18 20:19:55.631764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:43.784 [2024-11-18 20:19:55.631774] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:43.784 [2024-11-18 20:19:55.631782] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:43.784 [2024-11-18 20:19:55.631788] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:43.784 [2024-11-18 20:19:55.631796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:43.784 [2024-11-18 20:19:55.631808] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:43.784 [2024-11-18 20:19:55.631816] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:43.784 [2024-11-18 20:19:55.631821] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:43.784 [2024-11-18 20:19:55.631830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:43.784 [2024-11-18 20:19:55.639645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:43.784 [2024-11-18 20:19:55.639672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:43.784 [2024-11-18 20:19:55.639689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:43.784 [2024-11-18 20:19:55.639701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:43.784 ===================================================== 00:18:43.784 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:43.784 ===================================================== 00:18:43.784 Controller Capabilities/Features 00:18:43.784 
================================ 00:18:43.784 Vendor ID: 4e58 00:18:43.784 Subsystem Vendor ID: 4e58 00:18:43.784 Serial Number: SPDK2 00:18:43.784 Model Number: SPDK bdev Controller 00:18:43.784 Firmware Version: 25.01 00:18:43.784 Recommended Arb Burst: 6 00:18:43.784 IEEE OUI Identifier: 8d 6b 50 00:18:43.784 Multi-path I/O 00:18:43.784 May have multiple subsystem ports: Yes 00:18:43.784 May have multiple controllers: Yes 00:18:43.784 Associated with SR-IOV VF: No 00:18:43.784 Max Data Transfer Size: 131072 00:18:43.784 Max Number of Namespaces: 32 00:18:43.784 Max Number of I/O Queues: 127 00:18:43.784 NVMe Specification Version (VS): 1.3 00:18:43.784 NVMe Specification Version (Identify): 1.3 00:18:43.784 Maximum Queue Entries: 256 00:18:43.784 Contiguous Queues Required: Yes 00:18:43.784 Arbitration Mechanisms Supported 00:18:43.784 Weighted Round Robin: Not Supported 00:18:43.784 Vendor Specific: Not Supported 00:18:43.784 Reset Timeout: 15000 ms 00:18:43.784 Doorbell Stride: 4 bytes 00:18:43.784 NVM Subsystem Reset: Not Supported 00:18:43.784 Command Sets Supported 00:18:43.784 NVM Command Set: Supported 00:18:43.784 Boot Partition: Not Supported 00:18:43.784 Memory Page Size Minimum: 4096 bytes 00:18:43.784 Memory Page Size Maximum: 4096 bytes 00:18:43.784 Persistent Memory Region: Not Supported 00:18:43.784 Optional Asynchronous Events Supported 00:18:43.784 Namespace Attribute Notices: Supported 00:18:43.784 Firmware Activation Notices: Not Supported 00:18:43.784 ANA Change Notices: Not Supported 00:18:43.784 PLE Aggregate Log Change Notices: Not Supported 00:18:43.784 LBA Status Info Alert Notices: Not Supported 00:18:43.784 EGE Aggregate Log Change Notices: Not Supported 00:18:43.784 Normal NVM Subsystem Shutdown event: Not Supported 00:18:43.784 Zone Descriptor Change Notices: Not Supported 00:18:43.784 Discovery Log Change Notices: Not Supported 00:18:43.784 Controller Attributes 00:18:43.784 128-bit Host Identifier: Supported 00:18:43.784 
Non-Operational Permissive Mode: Not Supported 00:18:43.784 NVM Sets: Not Supported 00:18:43.784 Read Recovery Levels: Not Supported 00:18:43.784 Endurance Groups: Not Supported 00:18:43.784 Predictable Latency Mode: Not Supported 00:18:43.784 Traffic Based Keep ALive: Not Supported 00:18:43.784 Namespace Granularity: Not Supported 00:18:43.784 SQ Associations: Not Supported 00:18:43.784 UUID List: Not Supported 00:18:43.784 Multi-Domain Subsystem: Not Supported 00:18:43.784 Fixed Capacity Management: Not Supported 00:18:43.784 Variable Capacity Management: Not Supported 00:18:43.784 Delete Endurance Group: Not Supported 00:18:43.784 Delete NVM Set: Not Supported 00:18:43.784 Extended LBA Formats Supported: Not Supported 00:18:43.784 Flexible Data Placement Supported: Not Supported 00:18:43.784 00:18:43.784 Controller Memory Buffer Support 00:18:43.784 ================================ 00:18:43.784 Supported: No 00:18:43.784 00:18:43.784 Persistent Memory Region Support 00:18:43.784 ================================ 00:18:43.784 Supported: No 00:18:43.784 00:18:43.784 Admin Command Set Attributes 00:18:43.784 ============================ 00:18:43.784 Security Send/Receive: Not Supported 00:18:43.784 Format NVM: Not Supported 00:18:43.784 Firmware Activate/Download: Not Supported 00:18:43.784 Namespace Management: Not Supported 00:18:43.784 Device Self-Test: Not Supported 00:18:43.784 Directives: Not Supported 00:18:43.784 NVMe-MI: Not Supported 00:18:43.784 Virtualization Management: Not Supported 00:18:43.784 Doorbell Buffer Config: Not Supported 00:18:43.784 Get LBA Status Capability: Not Supported 00:18:43.784 Command & Feature Lockdown Capability: Not Supported 00:18:43.784 Abort Command Limit: 4 00:18:43.784 Async Event Request Limit: 4 00:18:43.784 Number of Firmware Slots: N/A 00:18:43.784 Firmware Slot 1 Read-Only: N/A 00:18:43.784 Firmware Activation Without Reset: N/A 00:18:43.784 Multiple Update Detection Support: N/A 00:18:43.784 Firmware Update 
Granularity: No Information Provided 00:18:43.784 Per-Namespace SMART Log: No 00:18:43.784 Asymmetric Namespace Access Log Page: Not Supported 00:18:43.784 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:43.784 Command Effects Log Page: Supported 00:18:43.784 Get Log Page Extended Data: Supported 00:18:43.784 Telemetry Log Pages: Not Supported 00:18:43.784 Persistent Event Log Pages: Not Supported 00:18:43.784 Supported Log Pages Log Page: May Support 00:18:43.784 Commands Supported & Effects Log Page: Not Supported 00:18:43.784 Feature Identifiers & Effects Log Page: May Support 00:18:43.784 NVMe-MI Commands & Effects Log Page: May Support 00:18:43.784 Data Area 4 for Telemetry Log: Not Supported 00:18:43.784 Error Log Page Entries Supported: 128 00:18:43.784 Keep Alive: Supported 00:18:43.784 Keep Alive Granularity: 10000 ms 00:18:43.784 00:18:43.784 NVM Command Set Attributes 00:18:43.784 ========================== 00:18:43.784 Submission Queue Entry Size 00:18:43.784 Max: 64 00:18:43.784 Min: 64 00:18:43.784 Completion Queue Entry Size 00:18:43.784 Max: 16 00:18:43.784 Min: 16 00:18:43.784 Number of Namespaces: 32 00:18:43.784 Compare Command: Supported 00:18:43.784 Write Uncorrectable Command: Not Supported 00:18:43.784 Dataset Management Command: Supported 00:18:43.784 Write Zeroes Command: Supported 00:18:43.784 Set Features Save Field: Not Supported 00:18:43.784 Reservations: Not Supported 00:18:43.785 Timestamp: Not Supported 00:18:43.785 Copy: Supported 00:18:43.785 Volatile Write Cache: Present 00:18:43.785 Atomic Write Unit (Normal): 1 00:18:43.785 Atomic Write Unit (PFail): 1 00:18:43.785 Atomic Compare & Write Unit: 1 00:18:43.785 Fused Compare & Write: Supported 00:18:43.785 Scatter-Gather List 00:18:43.785 SGL Command Set: Supported (Dword aligned) 00:18:43.785 SGL Keyed: Not Supported 00:18:43.785 SGL Bit Bucket Descriptor: Not Supported 00:18:43.785 SGL Metadata Pointer: Not Supported 00:18:43.785 Oversized SGL: Not Supported 00:18:43.785 SGL 
Metadata Address: Not Supported 00:18:43.785 SGL Offset: Not Supported 00:18:43.785 Transport SGL Data Block: Not Supported 00:18:43.785 Replay Protected Memory Block: Not Supported 00:18:43.785 00:18:43.785 Firmware Slot Information 00:18:43.785 ========================= 00:18:43.785 Active slot: 1 00:18:43.785 Slot 1 Firmware Revision: 25.01 00:18:43.785 00:18:43.785 00:18:43.785 Commands Supported and Effects 00:18:43.785 ============================== 00:18:43.785 Admin Commands 00:18:43.785 -------------- 00:18:43.785 Get Log Page (02h): Supported 00:18:43.785 Identify (06h): Supported 00:18:43.785 Abort (08h): Supported 00:18:43.785 Set Features (09h): Supported 00:18:43.785 Get Features (0Ah): Supported 00:18:43.785 Asynchronous Event Request (0Ch): Supported 00:18:43.785 Keep Alive (18h): Supported 00:18:43.785 I/O Commands 00:18:43.785 ------------ 00:18:43.785 Flush (00h): Supported LBA-Change 00:18:43.785 Write (01h): Supported LBA-Change 00:18:43.785 Read (02h): Supported 00:18:43.785 Compare (05h): Supported 00:18:43.785 Write Zeroes (08h): Supported LBA-Change 00:18:43.785 Dataset Management (09h): Supported LBA-Change 00:18:43.785 Copy (19h): Supported LBA-Change 00:18:43.785 00:18:43.785 Error Log 00:18:43.785 ========= 00:18:43.785 00:18:43.785 Arbitration 00:18:43.785 =========== 00:18:43.785 Arbitration Burst: 1 00:18:43.785 00:18:43.785 Power Management 00:18:43.785 ================ 00:18:43.785 Number of Power States: 1 00:18:43.785 Current Power State: Power State #0 00:18:43.785 Power State #0: 00:18:43.785 Max Power: 0.00 W 00:18:43.785 Non-Operational State: Operational 00:18:43.785 Entry Latency: Not Reported 00:18:43.785 Exit Latency: Not Reported 00:18:43.785 Relative Read Throughput: 0 00:18:43.785 Relative Read Latency: 0 00:18:43.785 Relative Write Throughput: 0 00:18:43.785 Relative Write Latency: 0 00:18:43.785 Idle Power: Not Reported 00:18:43.785 Active Power: Not Reported 00:18:43.785 Non-Operational Permissive Mode: Not 
Supported 00:18:43.785 00:18:43.785 Health Information 00:18:43.785 ================== 00:18:43.785 Critical Warnings: 00:18:43.785 Available Spare Space: OK 00:18:43.785 Temperature: OK 00:18:43.785 Device Reliability: OK 00:18:43.785 Read Only: No 00:18:43.785 Volatile Memory Backup: OK 00:18:43.785 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:43.785 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:43.785 Available Spare: 0% 00:18:43.785 [2024-11-18 20:19:55.639823] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:43.785 [2024-11-18 20:19:55.647649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:43.785 [2024-11-18 20:19:55.647698] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:18:43.785 [2024-11-18 20:19:55.647717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.785 [2024-11-18 20:19:55.647728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.785 [2024-11-18 20:19:55.647737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.785 [2024-11-18 20:19:55.647747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.785 [2024-11-18 20:19:55.647812] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:43.785 [2024-11-18 20:19:55.647832] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:43.785 
[2024-11-18 20:19:55.648811] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:43.785 [2024-11-18 20:19:55.648899] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:18:43.785 [2024-11-18 20:19:55.648929] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:18:43.785 [2024-11-18 20:19:55.649823] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:43.785 [2024-11-18 20:19:55.649847] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:18:43.785 [2024-11-18 20:19:55.649901] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:43.785 [2024-11-18 20:19:55.651145] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:43.785 Available Spare Threshold: 0% 00:18:43.785 Life Percentage Used: 0% 00:18:43.785 Data Units Read: 0 00:18:43.785 Data Units Written: 0 00:18:43.785 Host Read Commands: 0 00:18:43.785 Host Write Commands: 0 00:18:43.785 Controller Busy Time: 0 minutes 00:18:43.785 Power Cycles: 0 00:18:43.785 Power On Hours: 0 hours 00:18:43.785 Unsafe Shutdowns: 0 00:18:43.785 Unrecoverable Media Errors: 0 00:18:43.785 Lifetime Error Log Entries: 0 00:18:43.785 Warning Temperature Time: 0 minutes 00:18:43.785 Critical Temperature Time: 0 minutes 00:18:43.785 00:18:43.785 Number of Queues 00:18:43.785 ================ 00:18:43.785 Number of I/O Submission Queues: 127 00:18:43.785 Number of I/O Completion Queues: 127 00:18:43.785 00:18:43.785 Active Namespaces 00:18:43.785 ================= 00:18:43.785 Namespace ID:1 00:18:43.785 Error Recovery Timeout: Unlimited 
00:18:43.785 Command Set Identifier: NVM (00h) 00:18:43.785 Deallocate: Supported 00:18:43.785 Deallocated/Unwritten Error: Not Supported 00:18:43.785 Deallocated Read Value: Unknown 00:18:43.785 Deallocate in Write Zeroes: Not Supported 00:18:43.785 Deallocated Guard Field: 0xFFFF 00:18:43.785 Flush: Supported 00:18:43.785 Reservation: Supported 00:18:43.785 Namespace Sharing Capabilities: Multiple Controllers 00:18:43.785 Size (in LBAs): 131072 (0GiB) 00:18:43.785 Capacity (in LBAs): 131072 (0GiB) 00:18:43.785 Utilization (in LBAs): 131072 (0GiB) 00:18:43.785 NGUID: 0FEDE46855364226898231D74FAA6FF5 00:18:43.785 UUID: 0fede468-5536-4226-8982-31d74faa6ff5 00:18:43.785 Thin Provisioning: Not Supported 00:18:43.785 Per-NS Atomic Units: Yes 00:18:43.785 Atomic Boundary Size (Normal): 0 00:18:43.785 Atomic Boundary Size (PFail): 0 00:18:43.785 Atomic Boundary Offset: 0 00:18:43.785 Maximum Single Source Range Length: 65535 00:18:43.785 Maximum Copy Length: 65535 00:18:43.785 Maximum Source Range Count: 1 00:18:43.785 NGUID/EUI64 Never Reused: No 00:18:43.785 Namespace Write Protected: No 00:18:43.785 Number of LBA Formats: 1 00:18:43.785 Current LBA Format: LBA Format #00 00:18:43.785 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:43.785 00:18:43.785 20:19:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:44.045 [2024-11-18 20:19:55.900434] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:49.321 Initializing NVMe Controllers 00:18:49.321 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:49.321 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:18:49.321 Initialization complete. Launching workers. 00:18:49.321 ======================================================== 00:18:49.321 Latency(us) 00:18:49.321 Device Information : IOPS MiB/s Average min max 00:18:49.321 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34202.10 133.60 3741.61 1177.88 7391.72 00:18:49.321 ======================================================== 00:18:49.321 Total : 34202.10 133.60 3741.61 1177.88 7391.72 00:18:49.321 00:18:49.321 [2024-11-18 20:20:01.008027] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:49.321 20:20:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:49.321 [2024-11-18 20:20:01.261755] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:54.602 Initializing NVMe Controllers 00:18:54.602 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:54.602 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:54.602 Initialization complete. Launching workers. 
00:18:54.602 ======================================================== 00:18:54.602 Latency(us) 00:18:54.602 Device Information : IOPS MiB/s Average min max 00:18:54.602 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31757.97 124.05 4031.45 1221.51 8208.27 00:18:54.602 ======================================================== 00:18:54.602 Total : 31757.97 124.05 4031.45 1221.51 8208.27 00:18:54.602 00:18:54.602 [2024-11-18 20:20:06.284864] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:54.602 20:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:54.602 [2024-11-18 20:20:06.508736] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:59.883 [2024-11-18 20:20:11.651783] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:59.883 Initializing NVMe Controllers 00:18:59.883 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:59.883 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:59.883 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:59.883 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:59.883 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:59.883 Initialization complete. Launching workers. 
00:18:59.883 Starting thread on core 2 00:18:59.883 Starting thread on core 3 00:18:59.883 Starting thread on core 1 00:18:59.883 20:20:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:19:00.144 [2024-11-18 20:20:11.965164] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:04.341 [2024-11-18 20:20:15.821902] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:04.341 Initializing NVMe Controllers 00:19:04.341 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:04.341 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:04.341 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:19:04.341 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:19:04.341 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:19:04.341 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:19:04.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:04.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:04.341 Initialization complete. Launching workers. 
00:19:04.341 Starting thread on core 1 with urgent priority queue 00:19:04.341 Starting thread on core 2 with urgent priority queue 00:19:04.341 Starting thread on core 3 with urgent priority queue 00:19:04.341 Starting thread on core 0 with urgent priority queue 00:19:04.341 SPDK bdev Controller (SPDK2 ) core 0: 4590.67 IO/s 21.78 secs/100000 ios 00:19:04.341 SPDK bdev Controller (SPDK2 ) core 1: 4057.33 IO/s 24.65 secs/100000 ios 00:19:04.341 SPDK bdev Controller (SPDK2 ) core 2: 4630.33 IO/s 21.60 secs/100000 ios 00:19:04.341 SPDK bdev Controller (SPDK2 ) core 3: 3819.67 IO/s 26.18 secs/100000 ios 00:19:04.341 ======================================================== 00:19:04.341 00:19:04.341 20:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:04.341 [2024-11-18 20:20:16.140166] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:04.341 Initializing NVMe Controllers 00:19:04.341 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:04.341 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:04.341 Namespace ID: 1 size: 0GB 00:19:04.341 Initialization complete. 00:19:04.341 INFO: using host memory buffer for IO 00:19:04.341 Hello world! 
00:19:04.341 [2024-11-18 20:20:16.153236] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:04.341 20:20:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:04.600 [2024-11-18 20:20:16.456987] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:05.542 Initializing NVMe Controllers 00:19:05.542 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:05.542 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:05.542 Initialization complete. Launching workers. 00:19:05.542 submit (in ns) avg, min, max = 8018.6, 3522.2, 4999712.2 00:19:05.542 complete (in ns) avg, min, max = 24029.8, 2066.7, 5016452.2 00:19:05.542 00:19:05.542 Submit histogram 00:19:05.542 ================ 00:19:05.542 Range in us Cumulative Count 00:19:05.542 3.508 - 3.532: 0.0615% ( 8) 00:19:05.542 3.532 - 3.556: 0.4455% ( 50) 00:19:05.542 3.556 - 3.579: 1.5133% ( 139) 00:19:05.542 3.579 - 3.603: 4.4400% ( 381) 00:19:05.542 3.603 - 3.627: 9.1412% ( 612) 00:19:05.542 3.627 - 3.650: 18.0673% ( 1162) 00:19:05.542 3.650 - 3.674: 27.1470% ( 1182) 00:19:05.542 3.674 - 3.698: 37.8630% ( 1395) 00:19:05.542 3.698 - 3.721: 46.8044% ( 1164) 00:19:05.542 3.721 - 3.745: 53.9100% ( 925) 00:19:05.542 3.745 - 3.769: 58.5574% ( 605) 00:19:05.542 3.769 - 3.793: 63.5274% ( 647) 00:19:05.542 3.793 - 3.816: 67.5910% ( 529) 00:19:05.542 3.816 - 3.840: 71.1400% ( 462) 00:19:05.542 3.840 - 3.864: 74.5199% ( 440) 00:19:05.542 3.864 - 3.887: 77.8461% ( 433) 00:19:05.542 3.887 - 3.911: 80.7420% ( 377) 00:19:05.542 3.911 - 3.935: 84.0068% ( 425) 00:19:05.542 3.935 - 3.959: 86.5878% ( 336) 00:19:05.542 3.959 - 3.982: 88.5697% ( 258) 00:19:05.542 3.982 - 4.006: 90.5439% ( 257) 
00:19:05.542 4.006 - 4.030: 92.1877% ( 214) 00:19:05.542 4.030 - 4.053: 93.5628% ( 179) 00:19:05.542 4.053 - 4.077: 94.5614% ( 130) 00:19:05.542 4.077 - 4.101: 95.2143% ( 85) 00:19:05.542 4.101 - 4.124: 95.7290% ( 67) 00:19:05.542 4.124 - 4.148: 96.0363% ( 40) 00:19:05.542 4.148 - 4.172: 96.2590% ( 29) 00:19:05.542 4.172 - 4.196: 96.3896% ( 17) 00:19:05.542 4.196 - 4.219: 96.5279% ( 18) 00:19:05.542 4.219 - 4.243: 96.6662% ( 18) 00:19:05.542 4.243 - 4.267: 96.7353% ( 9) 00:19:05.542 4.267 - 4.290: 96.8352% ( 13) 00:19:05.542 4.290 - 4.314: 96.8966% ( 8) 00:19:05.542 4.314 - 4.338: 96.9427% ( 6) 00:19:05.542 4.338 - 4.361: 96.9965% ( 7) 00:19:05.542 4.361 - 4.385: 97.0426% ( 6) 00:19:05.542 4.385 - 4.409: 97.0886% ( 6) 00:19:05.542 4.409 - 4.433: 97.1040% ( 2) 00:19:05.542 4.433 - 4.456: 97.1194% ( 2) 00:19:05.542 4.456 - 4.480: 97.1347% ( 2) 00:19:05.542 4.480 - 4.504: 97.1424% ( 1) 00:19:05.542 4.527 - 4.551: 97.1731% ( 4) 00:19:05.542 4.551 - 4.575: 97.1808% ( 1) 00:19:05.542 4.599 - 4.622: 97.1885% ( 1) 00:19:05.542 4.646 - 4.670: 97.1962% ( 1) 00:19:05.542 4.670 - 4.693: 97.2116% ( 2) 00:19:05.542 4.693 - 4.717: 97.2346% ( 3) 00:19:05.542 4.717 - 4.741: 97.2653% ( 4) 00:19:05.542 4.741 - 4.764: 97.2807% ( 2) 00:19:05.542 4.764 - 4.788: 97.3114% ( 4) 00:19:05.542 4.788 - 4.812: 97.3652% ( 7) 00:19:05.542 4.812 - 4.836: 97.4036% ( 5) 00:19:05.542 4.836 - 4.859: 97.4650% ( 8) 00:19:05.542 4.859 - 4.883: 97.4958% ( 4) 00:19:05.542 4.883 - 4.907: 97.5342% ( 5) 00:19:05.542 4.907 - 4.930: 97.5880% ( 7) 00:19:05.542 4.930 - 4.954: 97.6187% ( 4) 00:19:05.542 4.954 - 4.978: 97.6725% ( 7) 00:19:05.542 4.978 - 5.001: 97.6955% ( 3) 00:19:05.542 5.001 - 5.025: 97.7185% ( 3) 00:19:05.542 5.025 - 5.049: 97.7723% ( 7) 00:19:05.542 5.049 - 5.073: 97.8184% ( 6) 00:19:05.542 5.073 - 5.096: 97.8568% ( 5) 00:19:05.542 5.096 - 5.120: 97.9029% ( 6) 00:19:05.542 5.120 - 5.144: 97.9567% ( 7) 00:19:05.542 5.144 - 5.167: 98.0181% ( 8) 00:19:05.542 5.167 - 5.191: 98.0335% ( 2) 
00:19:05.542 5.191 - 5.215: 98.0489% ( 2) 00:19:05.542 5.215 - 5.239: 98.0642% ( 2) 00:19:05.542 5.239 - 5.262: 98.0873% ( 3) 00:19:05.542 5.262 - 5.286: 98.1103% ( 3) 00:19:05.542 5.286 - 5.310: 98.1180% ( 1) 00:19:05.542 5.357 - 5.381: 98.1334% ( 2) 00:19:05.542 5.381 - 5.404: 98.1564% ( 3) 00:19:05.542 5.428 - 5.452: 98.1871% ( 4) 00:19:05.542 5.523 - 5.547: 98.1948% ( 1) 00:19:05.542 5.570 - 5.594: 98.2102% ( 2) 00:19:05.542 5.689 - 5.713: 98.2179% ( 1) 00:19:05.542 5.784 - 5.807: 98.2255% ( 1) 00:19:05.542 5.902 - 5.926: 98.2332% ( 1) 00:19:05.542 5.926 - 5.950: 98.2409% ( 1) 00:19:05.542 5.950 - 5.973: 98.2486% ( 1) 00:19:05.542 5.973 - 5.997: 98.2563% ( 1) 00:19:05.542 6.163 - 6.210: 98.2639% ( 1) 00:19:05.542 6.210 - 6.258: 98.2870% ( 3) 00:19:05.542 6.684 - 6.732: 98.2947% ( 1) 00:19:05.542 6.779 - 6.827: 98.3024% ( 1) 00:19:05.542 6.969 - 7.016: 98.3100% ( 1) 00:19:05.542 7.159 - 7.206: 98.3177% ( 1) 00:19:05.542 7.206 - 7.253: 98.3408% ( 3) 00:19:05.542 7.348 - 7.396: 98.3561% ( 2) 00:19:05.542 7.396 - 7.443: 98.3792% ( 3) 00:19:05.542 7.443 - 7.490: 98.4022% ( 3) 00:19:05.542 7.490 - 7.538: 98.4176% ( 2) 00:19:05.542 7.538 - 7.585: 98.4253% ( 1) 00:19:05.542 7.585 - 7.633: 98.4637% ( 5) 00:19:05.542 7.727 - 7.775: 98.4713% ( 1) 00:19:05.542 7.775 - 7.822: 98.4790% ( 1) 00:19:05.542 7.822 - 7.870: 98.4944% ( 2) 00:19:05.542 7.870 - 7.917: 98.5021% ( 1) 00:19:05.542 7.917 - 7.964: 98.5251% ( 3) 00:19:05.542 7.964 - 8.012: 98.5328% ( 1) 00:19:05.542 8.012 - 8.059: 98.5558% ( 3) 00:19:05.543 8.059 - 8.107: 98.5712% ( 2) 00:19:05.543 8.107 - 8.154: 98.5866% ( 2) 00:19:05.543 8.154 - 8.201: 98.5943% ( 1) 00:19:05.543 8.201 - 8.249: 98.6096% ( 2) 00:19:05.543 8.249 - 8.296: 98.6327% ( 3) 00:19:05.543 8.391 - 8.439: 98.6403% ( 1) 00:19:05.543 8.439 - 8.486: 98.6480% ( 1) 00:19:05.543 8.486 - 8.533: 98.6557% ( 1) 00:19:05.543 8.723 - 8.770: 98.6711% ( 2) 00:19:05.543 8.770 - 8.818: 98.6788% ( 1) 00:19:05.543 8.818 - 8.865: 98.6941% ( 2) 00:19:05.543 8.865 - 
8.913: 98.7018% ( 1) 00:19:05.543 8.913 - 8.960: 98.7095% ( 1) 00:19:05.543 8.960 - 9.007: 98.7172% ( 1) 00:19:05.543 9.007 - 9.055: 98.7248% ( 1) 00:19:05.543 9.055 - 9.102: 98.7402% ( 2) 00:19:05.543 9.102 - 9.150: 98.7479% ( 1) 00:19:05.543 9.197 - 9.244: 98.7556% ( 1) 00:19:05.543 9.244 - 9.292: 98.7633% ( 1) 00:19:05.543 9.292 - 9.339: 98.7786% ( 2) 00:19:05.543 9.434 - 9.481: 98.7863% ( 1) 00:19:05.543 9.529 - 9.576: 98.7940% ( 1) 00:19:05.543 9.671 - 9.719: 98.8093% ( 2) 00:19:05.543 9.719 - 9.766: 98.8170% ( 1) 00:19:05.543 9.861 - 9.908: 98.8247% ( 1) 00:19:05.543 9.908 - 9.956: 98.8324% ( 1) 00:19:05.543 10.003 - 10.050: 98.8401% ( 1) 00:19:05.543 10.145 - 10.193: 98.8477% ( 1) 00:19:05.543 10.240 - 10.287: 98.8631% ( 2) 00:19:05.543 10.430 - 10.477: 98.8785% ( 2) 00:19:05.543 10.477 - 10.524: 98.8862% ( 1) 00:19:05.543 10.524 - 10.572: 98.8938% ( 1) 00:19:05.543 10.714 - 10.761: 98.9015% ( 1) 00:19:05.543 10.761 - 10.809: 98.9092% ( 1) 00:19:05.543 10.904 - 10.951: 98.9169% ( 1) 00:19:05.543 10.999 - 11.046: 98.9246% ( 1) 00:19:05.543 11.141 - 11.188: 98.9322% ( 1) 00:19:05.543 11.236 - 11.283: 98.9399% ( 1) 00:19:05.543 11.757 - 11.804: 98.9553% ( 2) 00:19:05.543 11.899 - 11.947: 98.9630% ( 1) 00:19:05.543 11.994 - 12.041: 98.9707% ( 1) 00:19:05.543 12.421 - 12.516: 98.9860% ( 2) 00:19:05.543 12.800 - 12.895: 99.0091% ( 3) 00:19:05.543 12.895 - 12.990: 99.0167% ( 1) 00:19:05.543 12.990 - 13.084: 99.0398% ( 3) 00:19:05.543 13.084 - 13.179: 99.0552% ( 2) 00:19:05.543 13.274 - 13.369: 99.0705% ( 2) 00:19:05.543 13.369 - 13.464: 99.1012% ( 4) 00:19:05.543 13.653 - 13.748: 99.1089% ( 1) 00:19:05.543 13.843 - 13.938: 99.1166% ( 1) 00:19:05.543 13.938 - 14.033: 99.1320% ( 2) 00:19:05.543 14.033 - 14.127: 99.1397% ( 1) 00:19:05.543 14.791 - 14.886: 99.1473% ( 1) 00:19:05.543 14.886 - 14.981: 99.1550% ( 1) 00:19:05.543 15.076 - 15.170: 99.1704% ( 2) 00:19:05.543 15.170 - 15.265: 99.1781% ( 1) 00:19:05.543 17.256 - 17.351: 99.1934% ( 2) 00:19:05.543 17.351 - 
17.446: 99.2165% ( 3) 00:19:05.543 17.446 - 17.541: 99.2395% ( 3) 00:19:05.543 17.541 - 17.636: 99.2856% ( 6) 00:19:05.543 17.636 - 17.730: 99.3086% ( 3) 00:19:05.543 17.730 - 17.825: 99.3394% ( 4) 00:19:05.543 17.825 - 17.920: 99.4008% ( 8) 00:19:05.543 17.920 - 18.015: 99.4316% ( 4) 00:19:05.543 18.015 - 18.110: 99.4776% ( 6) 00:19:05.543 18.110 - 18.204: 99.5007% ( 3) 00:19:05.543 18.204 - 18.299: 99.5545% ( 7) 00:19:05.543 18.299 - 18.394: 99.5775% ( 3) 00:19:05.543 18.394 - 18.489: 99.6466% ( 9) 00:19:05.543 18.489 - 18.584: 99.6851% ( 5) 00:19:05.543 18.584 - 18.679: 99.7542% ( 9) 00:19:05.543 18.679 - 18.773: 99.7695% ( 2) 00:19:05.543 18.773 - 18.868: 99.7772% ( 1) 00:19:05.543 18.963 - 19.058: 99.7849% ( 1) 00:19:05.543 19.058 - 19.153: 99.7926% ( 1) 00:19:05.543 19.153 - 19.247: 99.8003% ( 1) 00:19:05.543 19.247 - 19.342: 99.8080% ( 1) 00:19:05.543 19.437 - 19.532: 99.8233% ( 2) 00:19:05.543 19.627 - 19.721: 99.8310% ( 1) 00:19:05.543 20.101 - 20.196: 99.8387% ( 1) 00:19:05.543 22.281 - 22.376: 99.8464% ( 1) 00:19:05.543 22.376 - 22.471: 99.8540% ( 1) 00:19:05.543 22.471 - 22.566: 99.8617% ( 1) 00:19:05.543 23.704 - 23.799: 99.8694% ( 1) 00:19:05.543 25.979 - 26.169: 99.8771% ( 1) 00:19:05.543 26.359 - 26.548: 99.8848% ( 1) 00:19:05.543 27.496 - 27.686: 99.8925% ( 1) 00:19:05.543 31.099 - 31.289: 99.9001% ( 1) 00:19:05.543 3046.210 - 3058.347: 99.9078% ( 1) 00:19:05.543 3980.705 - 4004.978: 99.9770% ( 9) 00:19:05.543 4004.978 - 4029.250: 99.9846% ( 1) 00:19:05.543 4975.881 - 5000.154: 100.0000% ( 2) 00:19:05.543 00:19:05.543 Complete histogram 00:19:05.543 ================== 00:19:05.543 Range in us Cumulative Count 00:19:05.543 2.062 - 2.074: 1.0908% ( 142) 00:19:05.543 2.074 - 2.086: 33.5382% ( 4224) 00:19:05.543 2.086 - 2.098: 45.9210% ( 1612) 00:19:05.543 2.098 - 2.110: 49.6006% ( 479) 00:19:05.543 2.110 - 2.121: 59.7788% ( 1325) 00:19:05.543 2.121 - 2.133: 62.3598% ( 336) 00:19:05.543 2.133 - 2.145: 67.0149% ( 606) 00:19:05.543 2.145 - 2.157: 
80.9264% ( 1811) 00:19:05.543 2.157 - 2.169: 83.0850% ( 281) 00:19:05.543 2.169 - 2.181: 84.9670% ( 245) 00:19:05.543 2.181 - 2.193: 88.3623% ( 442) 00:19:05.543 2.193 - 2.204: 89.3148% ( 124) 00:19:05.543 2.204 - 2.216: 90.2366% ( 120) 00:19:05.543 2.216 - 2.228: 91.7192% ( 193) 00:19:05.543 2.228 - 2.240: 93.4245% ( 222) 00:19:05.543 2.240 - 2.252: 94.4769% ( 137) 00:19:05.543 2.252 - 2.264: 94.8379% ( 47) 00:19:05.543 2.264 - 2.276: 94.9762% ( 18) 00:19:05.543 2.276 - 2.287: 95.1529% ( 23) 00:19:05.543 2.287 - 2.299: 95.3372% ( 24) 00:19:05.543 2.299 - 2.311: 95.5830% ( 32) 00:19:05.543 2.311 - 2.323: 95.8212% ( 31) 00:19:05.543 2.323 - 2.335: 95.8903% ( 9) 00:19:05.543 2.335 - 2.347: 95.9287% ( 5) 00:19:05.543 2.347 - 2.359: 96.0439% ( 15) 00:19:05.543 2.359 - 2.370: 96.2513% ( 27) 00:19:05.543 2.370 - 2.382: 96.5356% ( 37) 00:19:05.543 2.382 - 2.394: 96.8659% ( 43) 00:19:05.543 2.394 - 2.406: 97.1962% ( 43) 00:19:05.543 2.406 - 2.418: 97.4343% ( 31) 00:19:05.543 2.418 - 2.430: 97.6571% ( 29) 00:19:05.543 2.430 - 2.441: 97.8338% ( 23) 00:19:05.543 2.441 - 2.453: 97.9567% ( 16) 00:19:05.543 2.453 - 2.465: 98.0642% ( 14) 00:19:05.543 2.465 - 2.477: 98.1487% ( 11) 00:19:05.543 2.477 - 2.489: 98.2025% ( 7) 00:19:05.543 2.489 - 2.501: 98.2332% ( 4) 00:19:05.543 2.501 - 2.513: 98.2793% ( 6) 00:19:05.543 2.513 - 2.524: 98.3254% ( 6) 00:19:05.543 2.524 - 2.536: 98.3561% ( 4) 00:19:05.543 2.536 - 2.548: 98.3638% ( 1) 00:19:05.543 2.548 - 2.560: 98.3715% ( 1) 00:19:05.543 2.560 - 2.572: 98.4022% ( 4) 00:19:05.543 2.572 - 2.584: 98.4253% ( 3) 00:19:05.543 2.607 - 2.619: 98.4406% ( 2) 00:19:05.543 2.643 - 2.655: 98.4790% ( 5) 00:19:05.543 2.667 - 2.679: 98.4944% ( 2) 00:19:05.543 2.702 - 2.714: 98.5098% ( 2) 00:19:05.543 2.750 - 2.761: 98.5174% ( 1) 00:19:05.543 2.773 - 2.785: 98.5251% ( 1) 00:19:05.543 2.785 - 2.797: 98.5328% ( 1) 00:19:05.543 2.797 - 2.809: 98.5482% ( 2) 00:19:05.543 2.809 - 2.821: 98.5558% ( 1) 00:19:05.543 2.821 - 2.833: 98.5635% ( 1) 00:19:05.543 
2.856 - 2.868: 98.5712% ( 1) 00:19:05.803 2.904 - 2.916: 98.5789% ( 1) 00:19:05.803 [2024-11-18 20:20:17.552460] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:05.803 2.951 - 2.963: 98.5866% ( 1) 00:19:05.803 2.987 - 2.999: 98.5943% ( 1) 00:19:05.803 3.022 - 3.034: 98.6096% ( 2) 00:19:05.803 3.058 - 3.081: 98.6173% ( 1) 00:19:05.803 3.342 - 3.366: 98.6327% ( 2) 00:19:05.803 3.437 - 3.461: 98.6480% ( 2) 00:19:05.803 3.461 - 3.484: 98.6557% ( 1) 00:19:05.803 3.484 - 3.508: 98.6711% ( 2) 00:19:05.803 3.508 - 3.532: 98.6788% ( 1) 00:19:05.803 3.532 - 3.556: 98.7018% ( 3) 00:19:05.803 3.556 - 3.579: 98.7172% ( 2) 00:19:05.803 3.603 - 3.627: 98.7248% ( 1) 00:19:05.803 3.627 - 3.650: 98.7402% ( 2) 00:19:05.803 3.698 - 3.721: 98.7479% ( 1) 00:19:05.803 3.721 - 3.745: 98.7556% ( 1) 00:19:05.803 3.769 - 3.793: 98.7786% ( 3) 00:19:05.803 3.793 - 3.816: 98.7863% ( 1) 00:19:05.803 3.816 - 3.840: 98.7940% ( 1) 00:19:05.803 3.840 - 3.864: 98.8093% ( 2) 00:19:05.803 3.864 - 3.887: 98.8247% ( 2) 00:19:05.803 4.006 - 4.030: 98.8324% ( 1) 00:19:05.803 4.030 - 4.053: 98.8401% ( 1) 00:19:05.803 4.124 - 4.148: 98.8477% ( 1) 00:19:05.803 4.196 - 4.219: 98.8554% ( 1) 00:19:05.803 4.267 - 4.290: 98.8631% ( 1) 00:19:05.803 4.456 - 4.480: 98.8708% ( 1) 00:19:05.803 5.073 - 5.096: 98.8785% ( 1) 00:19:05.804 5.262 - 5.286: 98.8862% ( 1) 00:19:05.804 5.357 - 5.381: 98.8938% ( 1) 00:19:05.804 5.641 - 5.665: 98.9015% ( 1) 00:19:05.804 5.689 - 5.713: 98.9092% ( 1) 00:19:05.804 5.831 - 5.855: 98.9169% ( 1) 00:19:05.804 5.879 - 5.902: 98.9246% ( 1) 00:19:05.804 5.973 - 5.997: 98.9322% ( 1) 00:19:05.804 6.068 - 6.116: 98.9399% ( 1) 00:19:05.804 6.163 - 6.210: 98.9476% ( 1) 00:19:05.804 6.305 - 6.353: 98.9630% ( 2) 00:19:05.804 6.542 - 6.590: 98.9707% ( 1) 00:19:05.804 6.874 - 6.921: 98.9783% ( 1) 00:19:05.804 7.111 - 7.159: 98.9860% ( 1) 00:19:05.804 7.159 - 7.206: 99.0014% ( 2) 00:19:05.804 7.396 - 7.443: 99.0091% ( 1) 00:19:05.804 7.727 
- 7.775: 99.0167% ( 1) 00:19:05.804 7.775 - 7.822: 99.0244% ( 1) 00:19:05.804 15.360 - 15.455: 99.0321% ( 1) 00:19:05.804 15.550 - 15.644: 99.0475% ( 2) 00:19:05.804 15.644 - 15.739: 99.0628% ( 2) 00:19:05.804 15.739 - 15.834: 99.0705% ( 1) 00:19:05.804 15.834 - 15.929: 99.0782% ( 1) 00:19:05.804 15.929 - 16.024: 99.0936% ( 2) 00:19:05.804 16.024 - 16.119: 99.1243% ( 4) 00:19:05.804 16.119 - 16.213: 99.1704% ( 6) 00:19:05.804 16.213 - 16.308: 99.1934% ( 3) 00:19:05.804 16.308 - 16.403: 99.2165% ( 3) 00:19:05.804 16.403 - 16.498: 99.2242% ( 1) 00:19:05.804 16.498 - 16.593: 99.2472% ( 3) 00:19:05.804 16.593 - 16.687: 99.2702% ( 3) 00:19:05.804 16.687 - 16.782: 99.2779% ( 1) 00:19:05.804 16.782 - 16.877: 99.3240% ( 6) 00:19:05.804 16.877 - 16.972: 99.3471% ( 3) 00:19:05.804 16.972 - 17.067: 99.3624% ( 2) 00:19:05.804 17.351 - 17.446: 99.3778% ( 2) 00:19:05.804 17.636 - 17.730: 99.3931% ( 2) 00:19:05.804 17.825 - 17.920: 99.4008% ( 1) 00:19:05.804 17.920 - 18.015: 99.4162% ( 2) 00:19:05.804 18.110 - 18.204: 99.4316% ( 2) 00:19:05.804 18.299 - 18.394: 99.4392% ( 1) 00:19:05.804 18.679 - 18.773: 99.4469% ( 1) 00:19:05.804 25.790 - 25.979: 99.4546% ( 1) 00:19:05.804 3009.801 - 3021.938: 99.4623% ( 1) 00:19:05.804 3034.074 - 3046.210: 99.4700% ( 1) 00:19:05.804 3980.705 - 4004.978: 99.8694% ( 52) 00:19:05.804 4004.978 - 4029.250: 99.9846% ( 15) 00:19:05.804 4975.881 - 5000.154: 99.9923% ( 1) 00:19:05.804 5000.154 - 5024.427: 100.0000% ( 1) 00:19:05.804 00:19:05.804 20:20:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:19:05.804 20:20:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:05.804 20:20:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:19:05.804 20:20:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:19:05.804 20:20:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:06.063 [ 00:19:06.063 { 00:19:06.063 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:06.063 "subtype": "Discovery", 00:19:06.063 "listen_addresses": [], 00:19:06.063 "allow_any_host": true, 00:19:06.063 "hosts": [] 00:19:06.063 }, 00:19:06.063 { 00:19:06.063 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:06.063 "subtype": "NVMe", 00:19:06.063 "listen_addresses": [ 00:19:06.063 { 00:19:06.063 "trtype": "VFIOUSER", 00:19:06.063 "adrfam": "IPv4", 00:19:06.063 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:06.063 "trsvcid": "0" 00:19:06.063 } 00:19:06.063 ], 00:19:06.063 "allow_any_host": true, 00:19:06.063 "hosts": [], 00:19:06.063 "serial_number": "SPDK1", 00:19:06.063 "model_number": "SPDK bdev Controller", 00:19:06.063 "max_namespaces": 32, 00:19:06.063 "min_cntlid": 1, 00:19:06.063 "max_cntlid": 65519, 00:19:06.063 "namespaces": [ 00:19:06.063 { 00:19:06.063 "nsid": 1, 00:19:06.063 "bdev_name": "Malloc1", 00:19:06.063 "name": "Malloc1", 00:19:06.063 "nguid": "CAFC6A1E944F47F3992B78A6B9E2E087", 00:19:06.063 "uuid": "cafc6a1e-944f-47f3-992b-78a6b9e2e087" 00:19:06.063 }, 00:19:06.063 { 00:19:06.063 "nsid": 2, 00:19:06.063 "bdev_name": "Malloc3", 00:19:06.063 "name": "Malloc3", 00:19:06.063 "nguid": "3A7625C705734EB69530896D15E7BF0B", 00:19:06.063 "uuid": "3a7625c7-0573-4eb6-9530-896d15e7bf0b" 00:19:06.063 } 00:19:06.063 ] 00:19:06.063 }, 00:19:06.063 { 00:19:06.063 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:06.063 "subtype": "NVMe", 00:19:06.063 "listen_addresses": [ 00:19:06.063 { 00:19:06.063 "trtype": "VFIOUSER", 00:19:06.063 "adrfam": "IPv4", 00:19:06.063 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:06.063 "trsvcid": "0" 00:19:06.063 } 00:19:06.063 
], 00:19:06.063 "allow_any_host": true, 00:19:06.063 "hosts": [], 00:19:06.063 "serial_number": "SPDK2", 00:19:06.063 "model_number": "SPDK bdev Controller", 00:19:06.063 "max_namespaces": 32, 00:19:06.063 "min_cntlid": 1, 00:19:06.063 "max_cntlid": 65519, 00:19:06.063 "namespaces": [ 00:19:06.063 { 00:19:06.063 "nsid": 1, 00:19:06.063 "bdev_name": "Malloc2", 00:19:06.063 "name": "Malloc2", 00:19:06.063 "nguid": "0FEDE46855364226898231D74FAA6FF5", 00:19:06.063 "uuid": "0fede468-5536-4226-8982-31d74faa6ff5" 00:19:06.063 } 00:19:06.063 ] 00:19:06.063 } 00:19:06.063 ] 00:19:06.063 20:20:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:06.063 20:20:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=234163 00:19:06.063 20:20:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:19:06.063 20:20:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:06.063 20:20:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:19:06.063 20:20:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:06.063 20:20:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:19:06.063 20:20:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:19:06.063 20:20:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:19:06.063 20:20:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:06.063 20:20:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:19:06.063 20:20:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:19:06.063 20:20:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:19:06.063 [2024-11-18 20:20:18.058107] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:06.322 20:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:06.322 20:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:06.322 20:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:19:06.322 20:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:06.322 20:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:19:06.580 Malloc4 00:19:06.580 20:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:19:06.839 [2024-11-18 20:20:18.659512] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:06.839 20:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:06.839 Asynchronous Event Request test 00:19:06.839 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:06.839 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:06.839 
Registering asynchronous event callbacks... 00:19:06.839 Starting namespace attribute notice tests for all controllers... 00:19:06.839 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:06.839 aer_cb - Changed Namespace 00:19:06.839 Cleaning up... 00:19:07.099 [ 00:19:07.099 { 00:19:07.099 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:07.099 "subtype": "Discovery", 00:19:07.099 "listen_addresses": [], 00:19:07.099 "allow_any_host": true, 00:19:07.099 "hosts": [] 00:19:07.099 }, 00:19:07.099 { 00:19:07.099 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:07.099 "subtype": "NVMe", 00:19:07.099 "listen_addresses": [ 00:19:07.099 { 00:19:07.099 "trtype": "VFIOUSER", 00:19:07.099 "adrfam": "IPv4", 00:19:07.099 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:07.099 "trsvcid": "0" 00:19:07.099 } 00:19:07.099 ], 00:19:07.099 "allow_any_host": true, 00:19:07.099 "hosts": [], 00:19:07.099 "serial_number": "SPDK1", 00:19:07.099 "model_number": "SPDK bdev Controller", 00:19:07.099 "max_namespaces": 32, 00:19:07.099 "min_cntlid": 1, 00:19:07.099 "max_cntlid": 65519, 00:19:07.099 "namespaces": [ 00:19:07.099 { 00:19:07.099 "nsid": 1, 00:19:07.099 "bdev_name": "Malloc1", 00:19:07.099 "name": "Malloc1", 00:19:07.099 "nguid": "CAFC6A1E944F47F3992B78A6B9E2E087", 00:19:07.099 "uuid": "cafc6a1e-944f-47f3-992b-78a6b9e2e087" 00:19:07.099 }, 00:19:07.099 { 00:19:07.099 "nsid": 2, 00:19:07.099 "bdev_name": "Malloc3", 00:19:07.099 "name": "Malloc3", 00:19:07.099 "nguid": "3A7625C705734EB69530896D15E7BF0B", 00:19:07.099 "uuid": "3a7625c7-0573-4eb6-9530-896d15e7bf0b" 00:19:07.099 } 00:19:07.099 ] 00:19:07.099 }, 00:19:07.099 { 00:19:07.099 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:07.099 "subtype": "NVMe", 00:19:07.099 "listen_addresses": [ 00:19:07.099 { 00:19:07.099 "trtype": "VFIOUSER", 00:19:07.099 "adrfam": "IPv4", 00:19:07.099 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:07.099 "trsvcid": "0" 
00:19:07.099 } 00:19:07.099 ], 00:19:07.099 "allow_any_host": true, 00:19:07.099 "hosts": [], 00:19:07.099 "serial_number": "SPDK2", 00:19:07.099 "model_number": "SPDK bdev Controller", 00:19:07.099 "max_namespaces": 32, 00:19:07.099 "min_cntlid": 1, 00:19:07.099 "max_cntlid": 65519, 00:19:07.099 "namespaces": [ 00:19:07.099 { 00:19:07.099 "nsid": 1, 00:19:07.099 "bdev_name": "Malloc2", 00:19:07.099 "name": "Malloc2", 00:19:07.099 "nguid": "0FEDE46855364226898231D74FAA6FF5", 00:19:07.099 "uuid": "0fede468-5536-4226-8982-31d74faa6ff5" 00:19:07.099 }, 00:19:07.099 { 00:19:07.099 "nsid": 2, 00:19:07.099 "bdev_name": "Malloc4", 00:19:07.099 "name": "Malloc4", 00:19:07.099 "nguid": "3665D540F2F04607ACC4C455AEAF24B3", 00:19:07.099 "uuid": "3665d540-f2f0-4607-acc4-c455aeaf24b3" 00:19:07.099 } 00:19:07.099 ] 00:19:07.099 } 00:19:07.099 ] 00:19:07.099 20:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 234163 00:19:07.100 20:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:19:07.100 20:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 228434 00:19:07.100 20:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 228434 ']' 00:19:07.100 20:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 228434 00:19:07.100 20:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:19:07.100 20:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.100 20:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 228434 00:19:07.100 20:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:07.100 20:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:07.100 20:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 228434' 00:19:07.100 killing process with pid 228434 00:19:07.100 20:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 228434 00:19:07.100 20:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 228434 00:19:07.367 20:20:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:07.367 20:20:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:07.367 20:20:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:19:07.367 20:20:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:19:07.367 20:20:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:19:07.367 20:20:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=234309 00:19:07.367 20:20:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:19:07.367 20:20:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 234309' 00:19:07.367 Process pid: 234309 00:19:07.367 20:20:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:07.367 20:20:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 234309 00:19:07.367 20:20:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@835 -- # '[' -z 234309 ']' 00:19:07.367 20:20:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.367 20:20:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.367 20:20:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.367 20:20:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.367 20:20:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:07.367 [2024-11-18 20:20:19.331617] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:19:07.367 [2024-11-18 20:20:19.332645] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:19:07.367 [2024-11-18 20:20:19.332714] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.625 [2024-11-18 20:20:19.401856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:07.625 [2024-11-18 20:20:19.448067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.625 [2024-11-18 20:20:19.448118] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:07.625 [2024-11-18 20:20:19.448146] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.625 [2024-11-18 20:20:19.448158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.625 [2024-11-18 20:20:19.448168] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:07.625 [2024-11-18 20:20:19.449606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.625 [2024-11-18 20:20:19.449664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.625 [2024-11-18 20:20:19.449735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:07.625 [2024-11-18 20:20:19.453655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.625 [2024-11-18 20:20:19.539910] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:19:07.625 [2024-11-18 20:20:19.540132] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:19:07.625 [2024-11-18 20:20:19.540438] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:19:07.625 [2024-11-18 20:20:19.541048] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:19:07.625 [2024-11-18 20:20:19.541273] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:19:07.625 20:20:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.625 20:20:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:19:07.625 20:20:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:09.008 20:20:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:19:09.008 20:20:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:09.008 20:20:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:09.008 20:20:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:09.008 20:20:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:09.008 20:20:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:09.266 Malloc1 00:19:09.266 20:20:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:09.524 20:20:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:09.783 20:20:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:19:10.041 20:20:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:10.041 20:20:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:10.041 20:20:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:10.299 Malloc2 00:19:10.299 20:20:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:10.868 20:20:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:10.868 20:20:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:11.127 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:19:11.127 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 234309 00:19:11.127 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 234309 ']' 00:19:11.127 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 234309 00:19:11.127 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:19:11.127 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.127 20:20:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 234309 00:19:11.385 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:11.385 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:11.385 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 234309' 00:19:11.385 killing process with pid 234309 00:19:11.385 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 234309 00:19:11.385 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 234309 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:11.645 00:19:11.645 real 0m54.378s 00:19:11.645 user 3m30.546s 00:19:11.645 sys 0m3.945s 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:11.645 ************************************ 00:19:11.645 END TEST nvmf_vfio_user 00:19:11.645 ************************************ 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:19:11.645 ************************************ 00:19:11.645 START TEST nvmf_vfio_user_nvme_compliance 00:19:11.645 ************************************ 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:11.645 * Looking for test storage... 00:19:11.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:19:11.645 20:20:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:19:11.645 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:11.646 20:20:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:11.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.646 --rc genhtml_branch_coverage=1 00:19:11.646 --rc genhtml_function_coverage=1 00:19:11.646 --rc genhtml_legend=1 00:19:11.646 --rc geninfo_all_blocks=1 00:19:11.646 --rc geninfo_unexecuted_blocks=1 00:19:11.646 00:19:11.646 ' 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:11.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.646 --rc genhtml_branch_coverage=1 00:19:11.646 --rc genhtml_function_coverage=1 00:19:11.646 --rc genhtml_legend=1 00:19:11.646 --rc geninfo_all_blocks=1 00:19:11.646 --rc geninfo_unexecuted_blocks=1 00:19:11.646 00:19:11.646 ' 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:11.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.646 --rc genhtml_branch_coverage=1 00:19:11.646 --rc genhtml_function_coverage=1 00:19:11.646 --rc 
genhtml_legend=1 00:19:11.646 --rc geninfo_all_blocks=1 00:19:11.646 --rc geninfo_unexecuted_blocks=1 00:19:11.646 00:19:11.646 ' 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:11.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.646 --rc genhtml_branch_coverage=1 00:19:11.646 --rc genhtml_function_coverage=1 00:19:11.646 --rc genhtml_legend=1 00:19:11.646 --rc geninfo_all_blocks=1 00:19:11.646 --rc geninfo_unexecuted_blocks=1 00:19:11.646 00:19:11.646 ' 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.646 20:20:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:11.646 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:11.646 20:20:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=234915 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 234915' 00:19:11.646 Process pid: 234915 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 234915 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 234915 ']' 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:11.646 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:11.908 [2024-11-18 20:20:23.684740] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:19:11.908 [2024-11-18 20:20:23.684819] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:11.908 [2024-11-18 20:20:23.751759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:11.908 [2024-11-18 20:20:23.794429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:11.908 [2024-11-18 20:20:23.794484] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:11.908 [2024-11-18 20:20:23.794513] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:11.908 [2024-11-18 20:20:23.794525] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:11.908 [2024-11-18 20:20:23.794534] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:11.908 [2024-11-18 20:20:23.795860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.908 [2024-11-18 20:20:23.795921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:11.908 [2024-11-18 20:20:23.795924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.167 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:12.167 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:19:12.167 20:20:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:13.106 20:20:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:13.106 20:20:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:13.106 20:20:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:13.106 20:20:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.106 20:20:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:13.106 20:20:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.106 20:20:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:13.106 20:20:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:13.106 20:20:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.106 20:20:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:13.106 malloc0 00:19:13.106 20:20:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.106 20:20:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:13.106 20:20:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.106 20:20:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:13.106 20:20:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.106 20:20:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:13.106 20:20:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.106 20:20:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:13.106 20:20:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.106 20:20:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:13.106 20:20:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.106 20:20:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:13.106 20:20:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:13.106 20:20:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:13.366 00:19:13.366 00:19:13.366 CUnit - A unit testing framework for C - Version 2.1-3 00:19:13.366 http://cunit.sourceforge.net/ 00:19:13.366 00:19:13.366 00:19:13.366 Suite: nvme_compliance 00:19:13.366 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-18 20:20:25.170229] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:13.366 [2024-11-18 20:20:25.171732] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:13.366 [2024-11-18 20:20:25.171759] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:13.366 [2024-11-18 20:20:25.171773] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:13.366 [2024-11-18 20:20:25.173243] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:13.366 passed 00:19:13.366 Test: admin_identify_ctrlr_verify_fused ...[2024-11-18 20:20:25.262878] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:13.366 [2024-11-18 20:20:25.265898] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:13.366 passed 00:19:13.366 Test: admin_identify_ns ...[2024-11-18 20:20:25.355744] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:13.626 [2024-11-18 20:20:25.419669] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:13.626 [2024-11-18 20:20:25.427651] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:13.626 [2024-11-18 20:20:25.448816] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:19:13.626 passed 00:19:13.626 Test: admin_get_features_mandatory_features ...[2024-11-18 20:20:25.532483] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:13.626 [2024-11-18 20:20:25.535504] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:13.626 passed 00:19:13.626 Test: admin_get_features_optional_features ...[2024-11-18 20:20:25.621073] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:13.626 [2024-11-18 20:20:25.624098] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:13.885 passed 00:19:13.885 Test: admin_set_features_number_of_queues ...[2024-11-18 20:20:25.711144] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:13.885 [2024-11-18 20:20:25.814757] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:13.885 passed 00:19:14.143 Test: admin_get_log_page_mandatory_logs ...[2024-11-18 20:20:25.898496] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:14.143 [2024-11-18 20:20:25.901519] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:14.143 passed 00:19:14.143 Test: admin_get_log_page_with_lpo ...[2024-11-18 20:20:25.986433] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:14.143 [2024-11-18 20:20:26.053655] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:19:14.143 [2024-11-18 20:20:26.066733] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:14.143 passed 00:19:14.403 Test: fabric_property_get ...[2024-11-18 20:20:26.151380] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:14.403 [2024-11-18 20:20:26.152734] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:14.403 [2024-11-18 20:20:26.154401] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:14.403 passed 00:19:14.403 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-18 20:20:26.236956] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:14.403 [2024-11-18 20:20:26.238238] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:14.403 [2024-11-18 20:20:26.239970] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:14.403 passed 00:19:14.403 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-18 20:20:26.326191] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:14.403 [2024-11-18 20:20:26.410680] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:14.663 [2024-11-18 20:20:26.426661] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:14.663 [2024-11-18 20:20:26.431766] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:14.663 passed 00:19:14.663 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-18 20:20:26.515666] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:14.663 [2024-11-18 20:20:26.516985] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:14.663 [2024-11-18 20:20:26.518700] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:14.663 passed 00:19:14.663 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-18 20:20:26.599889] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:14.922 [2024-11-18 20:20:26.679660] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:14.922 [2024-11-18 
20:20:26.702649] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:14.922 [2024-11-18 20:20:26.707753] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:14.922 passed 00:19:14.922 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-18 20:20:26.792313] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:14.922 [2024-11-18 20:20:26.793590] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:14.922 [2024-11-18 20:20:26.793650] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:14.922 [2024-11-18 20:20:26.795336] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:14.922 passed 00:19:14.922 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-18 20:20:26.876719] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:15.181 [2024-11-18 20:20:26.969661] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:15.181 [2024-11-18 20:20:26.977661] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:15.181 [2024-11-18 20:20:26.985650] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:15.181 [2024-11-18 20:20:26.993645] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:15.181 [2024-11-18 20:20:27.022743] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:15.181 passed 00:19:15.181 Test: admin_create_io_sq_verify_pc ...[2024-11-18 20:20:27.104950] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:15.181 [2024-11-18 20:20:27.121672] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:15.181 [2024-11-18 20:20:27.139316] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:15.181 passed 00:19:15.440 Test: admin_create_io_qp_max_qps ...[2024-11-18 20:20:27.228915] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:16.378 [2024-11-18 20:20:28.334657] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:19:16.946 [2024-11-18 20:20:28.721252] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:16.946 passed 00:19:16.946 Test: admin_create_io_sq_shared_cq ...[2024-11-18 20:20:28.804754] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:16.946 [2024-11-18 20:20:28.938648] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:17.205 [2024-11-18 20:20:28.975749] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:17.205 passed 00:19:17.205 00:19:17.205 Run Summary: Type Total Ran Passed Failed Inactive 00:19:17.205 suites 1 1 n/a 0 0 00:19:17.205 tests 18 18 18 0 0 00:19:17.205 asserts 360 360 360 0 n/a 00:19:17.205 00:19:17.205 Elapsed time = 1.579 seconds 00:19:17.205 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 234915 00:19:17.205 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 234915 ']' 00:19:17.205 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 234915 00:19:17.205 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:19:17.205 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.205 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 234915 00:19:17.205 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:17.205 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:17.205 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 234915' 00:19:17.205 killing process with pid 234915 00:19:17.205 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 234915 00:19:17.205 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 234915 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:17.464 00:19:17.464 real 0m5.804s 00:19:17.464 user 0m16.314s 00:19:17.464 sys 0m0.539s 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:17.464 ************************************ 00:19:17.464 END TEST nvmf_vfio_user_nvme_compliance 00:19:17.464 ************************************ 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:17.464 ************************************ 00:19:17.464 START TEST nvmf_vfio_user_fuzz 00:19:17.464 ************************************ 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:17.464 * Looking for test storage... 00:19:17.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:19:17.464 20:20:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:17.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.464 --rc genhtml_branch_coverage=1 00:19:17.464 --rc genhtml_function_coverage=1 00:19:17.464 --rc genhtml_legend=1 00:19:17.464 --rc geninfo_all_blocks=1 00:19:17.464 --rc geninfo_unexecuted_blocks=1 00:19:17.464 00:19:17.464 ' 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:17.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.464 --rc genhtml_branch_coverage=1 00:19:17.464 --rc genhtml_function_coverage=1 00:19:17.464 --rc genhtml_legend=1 00:19:17.464 --rc geninfo_all_blocks=1 00:19:17.464 --rc geninfo_unexecuted_blocks=1 00:19:17.464 00:19:17.464 ' 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:17.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.464 --rc genhtml_branch_coverage=1 00:19:17.464 --rc genhtml_function_coverage=1 00:19:17.464 --rc genhtml_legend=1 00:19:17.464 --rc geninfo_all_blocks=1 00:19:17.464 --rc geninfo_unexecuted_blocks=1 00:19:17.464 00:19:17.464 ' 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:17.464 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:19:17.464 --rc genhtml_branch_coverage=1 00:19:17.464 --rc genhtml_function_coverage=1 00:19:17.464 --rc genhtml_legend=1 00:19:17.464 --rc geninfo_all_blocks=1 00:19:17.464 --rc geninfo_unexecuted_blocks=1 00:19:17.464 00:19:17.464 ' 00:19:17.464 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.465 20:20:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:17.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=235641 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 235641' 00:19:17.465 Process pid: 235641 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 235641 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 235641 ']' 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:17.465 20:20:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:17.465 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:18.033 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.033 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:19:18.033 20:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:18.976 20:20:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:18.976 20:20:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.976 20:20:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:18.976 20:20:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.976 20:20:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:18.976 20:20:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:18.976 20:20:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.976 20:20:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:18.976 malloc0 00:19:18.976 20:20:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.976 20:20:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:18.976 20:20:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.976 20:20:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:18.976 20:20:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.976 20:20:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:18.976 20:20:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.976 20:20:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:18.976 20:20:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.976 20:20:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:18.976 20:20:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.976 20:20:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:18.976 20:20:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.976 20:20:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:19:18.976 20:20:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:51.058 Fuzzing completed. Shutting down the fuzz application 00:19:51.058 00:19:51.058 Dumping successful admin opcodes: 00:19:51.058 8, 9, 10, 24, 00:19:51.058 Dumping successful io opcodes: 00:19:51.058 0, 00:19:51.058 NS: 0x20000081ef00 I/O qp, Total commands completed: 661611, total successful commands: 2581, random_seed: 4231642880 00:19:51.058 NS: 0x20000081ef00 admin qp, Total commands completed: 86243, total successful commands: 690, random_seed: 2380810240 00:19:51.058 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:51.058 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.058 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:51.058 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.058 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 235641 00:19:51.058 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 235641 ']' 00:19:51.058 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 235641 00:19:51.058 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:19:51.058 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:51.058 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 235641 00:19:51.058 20:21:01 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:51.058 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:51.058 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 235641' 00:19:51.058 killing process with pid 235641 00:19:51.058 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 235641 00:19:51.058 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 235641 00:19:51.058 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:51.059 00:19:51.059 real 0m32.134s 00:19:51.059 user 0m30.373s 00:19:51.059 sys 0m29.457s 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:51.059 ************************************ 00:19:51.059 END TEST nvmf_vfio_user_fuzz 00:19:51.059 ************************************ 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:51.059 ************************************ 00:19:51.059 START TEST nvmf_auth_target 00:19:51.059 ************************************ 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:51.059 * Looking for test storage... 00:19:51.059 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:51.059 20:21:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:51.059 20:21:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:51.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.059 --rc genhtml_branch_coverage=1 00:19:51.059 --rc genhtml_function_coverage=1 00:19:51.059 --rc genhtml_legend=1 00:19:51.059 --rc geninfo_all_blocks=1 00:19:51.059 --rc geninfo_unexecuted_blocks=1 00:19:51.059 00:19:51.059 ' 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:51.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.059 --rc genhtml_branch_coverage=1 00:19:51.059 --rc genhtml_function_coverage=1 00:19:51.059 --rc genhtml_legend=1 00:19:51.059 --rc geninfo_all_blocks=1 00:19:51.059 --rc geninfo_unexecuted_blocks=1 00:19:51.059 00:19:51.059 ' 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:51.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.059 --rc genhtml_branch_coverage=1 00:19:51.059 --rc genhtml_function_coverage=1 00:19:51.059 --rc genhtml_legend=1 00:19:51.059 --rc geninfo_all_blocks=1 00:19:51.059 --rc geninfo_unexecuted_blocks=1 00:19:51.059 00:19:51.059 ' 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:51.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.059 --rc genhtml_branch_coverage=1 00:19:51.059 --rc genhtml_function_coverage=1 00:19:51.059 --rc genhtml_legend=1 00:19:51.059 
--rc geninfo_all_blocks=1 00:19:51.059 --rc geninfo_unexecuted_blocks=1 00:19:51.059 00:19:51.059 ' 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:51.059 
20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.059 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:51.060 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.060 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:51.060 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:51.060 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:51.060 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:51.060 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:51.060 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:51.060 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:51.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:51.060 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:51.060 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:51.060 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:51.060 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:51.060 20:21:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:51.060 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:51.060 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:51.060 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:51.060 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:51.060 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:51.060 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:51.060 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:51.060 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:51.060 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:51.060 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:51.060 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:51.060 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.060 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:51.060 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.060 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:51.060 20:21:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:51.060 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:51.060 20:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:51.998 20:21:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:51.998 20:21:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:51.998 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:51.998 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:51.998 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.999 
20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:51.999 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:51.999 
20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:51.999 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:51.999 20:21:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:51.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:51.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:19:51.999 00:19:51.999 --- 10.0.0.2 ping statistics --- 00:19:51.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.999 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:51.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:51.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:19:51.999 00:19:51.999 --- 10.0.0.1 ping statistics --- 00:19:51.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.999 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=241196 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 241196 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 241196 ']' 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:51.999 20:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.258 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:52.258 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:52.258 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:52.258 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:52.258 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.258 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.258 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=241223 00:19:52.258 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:52.258 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:52.258 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:52.258 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:52.258 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:52.258 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:52.258 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # digest=null 00:19:52.258 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:52.258 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:52.258 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f79a2367069446b215d1b9a194187ee60ca1e35ab38b9fc7 00:19:52.258 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:52.258 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.cjR 00:19:52.258 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f79a2367069446b215d1b9a194187ee60ca1e35ab38b9fc7 0 00:19:52.258 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f79a2367069446b215d1b9a194187ee60ca1e35ab38b9fc7 0 00:19:52.258 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:52.258 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:52.258 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f79a2367069446b215d1b9a194187ee60ca1e35ab38b9fc7 00:19:52.258 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:52.258 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.cjR 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.cjR 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.cjR 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=13dd7d9d0fddbede492171a7fac348af2abd6789c0a67ba067e9df122aa60222 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.RQc 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 13dd7d9d0fddbede492171a7fac348af2abd6789c0a67ba067e9df122aa60222 3 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 13dd7d9d0fddbede492171a7fac348af2abd6789c0a67ba067e9df122aa60222 3 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=13dd7d9d0fddbede492171a7fac348af2abd6789c0a67ba067e9df122aa60222 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.RQc 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.RQc 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.RQc 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4024e9f84dec3925e948bddadc9d4ca0 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.417 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4024e9f84dec3925e948bddadc9d4ca0 1 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
4024e9f84dec3925e948bddadc9d4ca0 1 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4024e9f84dec3925e948bddadc9d4ca0 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.417 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.417 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.417 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f92beaaf553f2721a22288f6424226afdebe97e123c3152b 00:19:52.518 20:21:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Pdb 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f92beaaf553f2721a22288f6424226afdebe97e123c3152b 2 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f92beaaf553f2721a22288f6424226afdebe97e123c3152b 2 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f92beaaf553f2721a22288f6424226afdebe97e123c3152b 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Pdb 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Pdb 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Pdb 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:52.518 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=950b70e0e202f0d0f26757da1104c51cab406ecd6740a1a3 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.wSj 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 950b70e0e202f0d0f26757da1104c51cab406ecd6740a1a3 2 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 950b70e0e202f0d0f26757da1104c51cab406ecd6740a1a3 2 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=950b70e0e202f0d0f26757da1104c51cab406ecd6740a1a3 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.wSj 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.wSj 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.wSj 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f37b52da84ab2ed55926390367122b7f 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.t6y 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f37b52da84ab2ed55926390367122b7f 1 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f37b52da84ab2ed55926390367122b7f 1 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f37b52da84ab2ed55926390367122b7f 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:52.519 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.t6y 00:19:52.778 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.t6y 00:19:52.778 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.t6y 00:19:52.778 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:52.778 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:52.778 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:52.778 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:52.778 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:52.778 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:52.778 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:52.778 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9ef7243e8a64841bb188b5e3dadf2db490a6f17f7d20c431a96b3e6d0f7ec6ca 00:19:52.778 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:52.778 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.iCE 00:19:52.778 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9ef7243e8a64841bb188b5e3dadf2db490a6f17f7d20c431a96b3e6d0f7ec6ca 3 00:19:52.778 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 9ef7243e8a64841bb188b5e3dadf2db490a6f17f7d20c431a96b3e6d0f7ec6ca 3 00:19:52.778 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:52.778 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:52.778 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9ef7243e8a64841bb188b5e3dadf2db490a6f17f7d20c431a96b3e6d0f7ec6ca 00:19:52.778 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:52.778 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:52.778 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.iCE 00:19:52.778 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.iCE 00:19:52.778 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.iCE 00:19:52.778 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:52.778 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 241196 00:19:52.778 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 241196 ']' 00:19:52.778 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.778 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:52.778 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:52.778 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:52.778 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.036 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:53.036 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:53.036 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 241223 /var/tmp/host.sock 00:19:53.036 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 241223 ']' 00:19:53.036 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:53.036 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:53.036 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:53.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:19:53.036 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:53.036 20:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.295 20:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:53.295 20:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:53.295 20:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:53.295 20:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.295 20:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.295 20:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.295 20:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:53.295 20:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.cjR 00:19:53.295 20:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.295 20:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.295 20:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.295 20:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.cjR 00:19:53.295 20:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.cjR 00:19:53.553 20:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.RQc ]] 00:19:53.553 20:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RQc 00:19:53.553 20:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.553 20:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.553 20:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.553 20:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RQc 00:19:53.553 20:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RQc 00:19:53.811 20:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:53.811 20:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.417 00:19:53.811 20:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.811 20:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.811 20:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.811 20:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.417 00:19:53.811 20:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.417 00:19:54.071 20:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.Pdb ]] 00:19:54.071 20:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Pdb 00:19:54.071 20:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.071 20:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.071 20:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.071 20:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Pdb 00:19:54.071 20:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Pdb 00:19:54.330 20:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:54.330 20:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.wSj 00:19:54.330 20:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.330 20:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.330 20:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.330 20:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.wSj 00:19:54.330 20:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.wSj 00:19:54.588 20:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.t6y ]] 00:19:54.588 20:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.t6y 00:19:54.588 20:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.588 20:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.588 20:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.588 20:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.t6y 00:19:54.588 20:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.t6y 00:19:55.155 20:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:55.155 20:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.iCE 00:19:55.155 20:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.155 20:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.155 20:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.155 20:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.iCE 00:19:55.155 20:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.iCE 00:19:55.155 20:21:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:55.155 20:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:55.155 20:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.155 20:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.155 20:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:55.155 20:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:55.721 20:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:55.721 20:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.721 20:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:55.721 20:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:55.721 20:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:55.721 20:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.721 20:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.721 20:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.721 20:21:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.721 20:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.721 20:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.721 20:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.721 20:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.978 00:19:55.978 20:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.978 20:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.978 20:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.238 20:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.238 20:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.238 20:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.238 20:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:56.238 20:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.238 20:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.238 { 00:19:56.238 "cntlid": 1, 00:19:56.238 "qid": 0, 00:19:56.238 "state": "enabled", 00:19:56.238 "thread": "nvmf_tgt_poll_group_000", 00:19:56.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:56.238 "listen_address": { 00:19:56.238 "trtype": "TCP", 00:19:56.238 "adrfam": "IPv4", 00:19:56.238 "traddr": "10.0.0.2", 00:19:56.238 "trsvcid": "4420" 00:19:56.238 }, 00:19:56.238 "peer_address": { 00:19:56.238 "trtype": "TCP", 00:19:56.238 "adrfam": "IPv4", 00:19:56.238 "traddr": "10.0.0.1", 00:19:56.238 "trsvcid": "39890" 00:19:56.238 }, 00:19:56.238 "auth": { 00:19:56.238 "state": "completed", 00:19:56.238 "digest": "sha256", 00:19:56.238 "dhgroup": "null" 00:19:56.238 } 00:19:56.238 } 00:19:56.238 ]' 00:19:56.238 20:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.238 20:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.238 20:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.238 20:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:56.238 20:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.238 20:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.238 20:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.238 20:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.809 20:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:19:56.809 20:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:20:02.079 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.079 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.079 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.079 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.079 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.079 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.079 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:20:02.079 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:02.079 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:20:02.079 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.079 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:02.079 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:02.079 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:02.079 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.079 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.079 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.079 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.079 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.079 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.079 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.079 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.079 00:20:02.079 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.079 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.079 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.079 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.080 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.080 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.080 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.080 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.080 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.080 { 00:20:02.080 "cntlid": 3, 00:20:02.080 "qid": 0, 00:20:02.080 "state": "enabled", 00:20:02.080 "thread": "nvmf_tgt_poll_group_000", 00:20:02.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:02.080 "listen_address": { 00:20:02.080 "trtype": "TCP", 00:20:02.080 "adrfam": "IPv4", 00:20:02.080 
"traddr": "10.0.0.2", 00:20:02.080 "trsvcid": "4420" 00:20:02.080 }, 00:20:02.080 "peer_address": { 00:20:02.080 "trtype": "TCP", 00:20:02.080 "adrfam": "IPv4", 00:20:02.080 "traddr": "10.0.0.1", 00:20:02.080 "trsvcid": "33640" 00:20:02.080 }, 00:20:02.080 "auth": { 00:20:02.080 "state": "completed", 00:20:02.080 "digest": "sha256", 00:20:02.080 "dhgroup": "null" 00:20:02.080 } 00:20:02.080 } 00:20:02.080 ]' 00:20:02.080 20:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.080 20:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.080 20:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.080 20:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:02.080 20:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.338 20:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.338 20:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.338 20:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.598 20:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:20:02.598 20:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:20:03.536 20:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.536 20:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:03.536 20:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.536 20:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.536 20:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.536 20:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.536 20:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:03.536 20:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:03.794 20:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:20:03.794 20:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.794 20:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:03.794 20:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:20:03.794 20:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:03.794 20:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.794 20:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.794 20:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.794 20:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.794 20:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.794 20:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.794 20:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.794 20:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.052 00:20:04.053 20:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.053 20:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.053 
20:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.312 20:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.312 20:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.312 20:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.312 20:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.312 20:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.312 20:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.312 { 00:20:04.312 "cntlid": 5, 00:20:04.312 "qid": 0, 00:20:04.312 "state": "enabled", 00:20:04.312 "thread": "nvmf_tgt_poll_group_000", 00:20:04.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:04.312 "listen_address": { 00:20:04.312 "trtype": "TCP", 00:20:04.312 "adrfam": "IPv4", 00:20:04.312 "traddr": "10.0.0.2", 00:20:04.312 "trsvcid": "4420" 00:20:04.312 }, 00:20:04.312 "peer_address": { 00:20:04.312 "trtype": "TCP", 00:20:04.312 "adrfam": "IPv4", 00:20:04.312 "traddr": "10.0.0.1", 00:20:04.312 "trsvcid": "33664" 00:20:04.312 }, 00:20:04.312 "auth": { 00:20:04.312 "state": "completed", 00:20:04.312 "digest": "sha256", 00:20:04.312 "dhgroup": "null" 00:20:04.312 } 00:20:04.312 } 00:20:04.312 ]' 00:20:04.312 20:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.312 20:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.312 20:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:20:04.312 20:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:04.312 20:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.312 20:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.312 20:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.312 20:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.571 20:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983: 00:20:04.571 20:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983: 00:20:05.505 20:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.505 20:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:05.505 20:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.505 20:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.505 20:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.505 20:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.505 20:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:05.505 20:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:05.764 20:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:20:05.764 20:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.764 20:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:05.764 20:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:05.764 20:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:05.764 20:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.764 20:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:05.764 20:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.764 20:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:05.764 20:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.764 20:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:05.764 20:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:05.764 20:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:06.330 00:20:06.330 20:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.330 20:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.330 20:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.330 20:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.331 20:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.331 20:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.331 20:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.331 20:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.589 
20:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.589 { 00:20:06.589 "cntlid": 7, 00:20:06.589 "qid": 0, 00:20:06.589 "state": "enabled", 00:20:06.589 "thread": "nvmf_tgt_poll_group_000", 00:20:06.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:06.589 "listen_address": { 00:20:06.589 "trtype": "TCP", 00:20:06.589 "adrfam": "IPv4", 00:20:06.589 "traddr": "10.0.0.2", 00:20:06.589 "trsvcid": "4420" 00:20:06.589 }, 00:20:06.589 "peer_address": { 00:20:06.589 "trtype": "TCP", 00:20:06.589 "adrfam": "IPv4", 00:20:06.589 "traddr": "10.0.0.1", 00:20:06.589 "trsvcid": "33698" 00:20:06.589 }, 00:20:06.589 "auth": { 00:20:06.589 "state": "completed", 00:20:06.589 "digest": "sha256", 00:20:06.589 "dhgroup": "null" 00:20:06.589 } 00:20:06.589 } 00:20:06.589 ]' 00:20:06.589 20:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.589 20:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.589 20:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.589 20:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:06.589 20:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.589 20:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.589 20:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.589 20:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.847 20:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:20:06.847 20:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:20:07.783 20:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.783 20:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.783 20:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.783 20:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.783 20:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.783 20:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:07.783 20:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.783 20:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:07.783 20:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:20:08.041 20:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:20:08.041 20:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.041 20:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:08.041 20:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:08.041 20:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:08.041 20:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.041 20:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.041 20:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.041 20:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.041 20:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.041 20:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.041 20:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.041 20:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.300 00:20:08.300 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.300 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.300 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.558 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.558 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.558 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.558 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.558 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.558 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.558 { 00:20:08.558 "cntlid": 9, 00:20:08.558 "qid": 0, 00:20:08.558 "state": "enabled", 00:20:08.558 "thread": "nvmf_tgt_poll_group_000", 00:20:08.558 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:08.558 "listen_address": { 00:20:08.558 "trtype": "TCP", 00:20:08.558 "adrfam": "IPv4", 00:20:08.558 "traddr": "10.0.0.2", 00:20:08.558 "trsvcid": "4420" 00:20:08.558 }, 00:20:08.558 "peer_address": { 00:20:08.558 "trtype": "TCP", 00:20:08.558 "adrfam": "IPv4", 00:20:08.558 "traddr": "10.0.0.1", 00:20:08.558 "trsvcid": "40074" 00:20:08.558 
}, 00:20:08.558 "auth": { 00:20:08.558 "state": "completed", 00:20:08.558 "digest": "sha256", 00:20:08.558 "dhgroup": "ffdhe2048" 00:20:08.558 } 00:20:08.558 } 00:20:08.558 ]' 00:20:08.558 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.817 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:08.817 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.817 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:08.817 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.817 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.817 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.817 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.076 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:20:09.076 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret 
DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:20:10.017 20:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.017 20:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:10.017 20:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.017 20:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.017 20:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.017 20:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.017 20:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:10.017 20:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:10.276 20:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:20:10.276 20:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.276 20:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:10.276 20:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:10.276 20:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:20:10.276 20:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.276 20:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.276 20:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.276 20:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.276 20:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.276 20:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.276 20:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.276 20:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.533 00:20:10.533 20:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.533 20:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.533 20:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.792 20:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.792 20:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.792 20:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.792 20:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.792 20:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.792 20:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.792 { 00:20:10.792 "cntlid": 11, 00:20:10.792 "qid": 0, 00:20:10.792 "state": "enabled", 00:20:10.792 "thread": "nvmf_tgt_poll_group_000", 00:20:10.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:10.792 "listen_address": { 00:20:10.792 "trtype": "TCP", 00:20:10.792 "adrfam": "IPv4", 00:20:10.792 "traddr": "10.0.0.2", 00:20:10.792 "trsvcid": "4420" 00:20:10.792 }, 00:20:10.792 "peer_address": { 00:20:10.792 "trtype": "TCP", 00:20:10.792 "adrfam": "IPv4", 00:20:10.792 "traddr": "10.0.0.1", 00:20:10.792 "trsvcid": "40112" 00:20:10.792 }, 00:20:10.792 "auth": { 00:20:10.792 "state": "completed", 00:20:10.792 "digest": "sha256", 00:20:10.792 "dhgroup": "ffdhe2048" 00:20:10.792 } 00:20:10.792 } 00:20:10.792 ]' 00:20:10.792 20:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.051 20:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.051 20:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.051 20:21:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:11.051 20:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.051 20:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.051 20:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.051 20:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.310 20:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:20:11.310 20:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:20:12.247 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.247 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.247 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:12.247 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.247 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.247 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.247 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:12.247 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:12.506 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:20:12.506 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.506 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:12.506 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:12.506 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:12.506 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.506 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.506 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.506 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:12.506 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.506 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.506 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.506 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.765 00:20:12.765 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.765 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.765 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.023 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.023 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.023 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.023 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.023 20:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.023 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.023 { 00:20:13.023 "cntlid": 13, 00:20:13.023 "qid": 0, 00:20:13.023 "state": "enabled", 00:20:13.023 "thread": "nvmf_tgt_poll_group_000", 00:20:13.023 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:13.023 "listen_address": { 00:20:13.023 "trtype": "TCP", 00:20:13.023 "adrfam": "IPv4", 00:20:13.023 "traddr": "10.0.0.2", 00:20:13.023 "trsvcid": "4420" 00:20:13.023 }, 00:20:13.023 "peer_address": { 00:20:13.023 "trtype": "TCP", 00:20:13.023 "adrfam": "IPv4", 00:20:13.023 "traddr": "10.0.0.1", 00:20:13.023 "trsvcid": "40138" 00:20:13.023 }, 00:20:13.023 "auth": { 00:20:13.023 "state": "completed", 00:20:13.023 "digest": "sha256", 00:20:13.023 "dhgroup": "ffdhe2048" 00:20:13.023 } 00:20:13.023 } 00:20:13.023 ]' 00:20:13.023 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.024 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:13.024 20:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.024 20:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:13.024 20:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.282 20:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.282 20:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.282 20:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.541 20:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983: 00:20:13.541 20:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983: 00:20:14.480 20:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.480 20:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.480 20:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.480 20:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.480 20:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.480 20:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.480 20:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:14.480 20:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:14.739 20:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:20:14.739 20:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.739 20:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:14.739 20:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:14.739 20:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:14.739 20:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.739 20:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:14.739 20:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.739 20:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.739 20:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.739 20:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:14.739 20:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:14.739 20:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:14.997 00:20:14.998 20:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.998 20:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.998 20:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.256 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.256 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.256 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.256 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.256 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.256 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.256 { 00:20:15.256 "cntlid": 15, 00:20:15.256 "qid": 0, 00:20:15.256 "state": "enabled", 00:20:15.256 "thread": "nvmf_tgt_poll_group_000", 00:20:15.256 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:15.256 "listen_address": { 00:20:15.256 "trtype": "TCP", 00:20:15.256 "adrfam": "IPv4", 00:20:15.256 "traddr": "10.0.0.2", 00:20:15.256 "trsvcid": "4420" 00:20:15.256 }, 00:20:15.256 "peer_address": { 00:20:15.256 "trtype": "TCP", 00:20:15.256 "adrfam": "IPv4", 00:20:15.256 "traddr": "10.0.0.1", 
00:20:15.256 "trsvcid": "40164" 00:20:15.256 }, 00:20:15.256 "auth": { 00:20:15.256 "state": "completed", 00:20:15.256 "digest": "sha256", 00:20:15.256 "dhgroup": "ffdhe2048" 00:20:15.256 } 00:20:15.256 } 00:20:15.256 ]' 00:20:15.256 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.256 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:15.256 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.256 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:15.256 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.256 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.256 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.256 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.827 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:20:15.827 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:20:16.766 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.766 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.766 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.766 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.766 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.766 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:16.766 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.766 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:16.766 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:16.766 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:16.766 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.766 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:16.766 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:16.766 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:16.766 20:21:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.766 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.766 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.766 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.766 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.766 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.766 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.766 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.334 00:20:17.334 20:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.334 20:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.334 20:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.593 20:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.593 20:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.593 20:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.593 20:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.593 20:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.593 20:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.593 { 00:20:17.593 "cntlid": 17, 00:20:17.593 "qid": 0, 00:20:17.593 "state": "enabled", 00:20:17.593 "thread": "nvmf_tgt_poll_group_000", 00:20:17.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:17.593 "listen_address": { 00:20:17.593 "trtype": "TCP", 00:20:17.593 "adrfam": "IPv4", 00:20:17.593 "traddr": "10.0.0.2", 00:20:17.593 "trsvcid": "4420" 00:20:17.593 }, 00:20:17.593 "peer_address": { 00:20:17.593 "trtype": "TCP", 00:20:17.593 "adrfam": "IPv4", 00:20:17.593 "traddr": "10.0.0.1", 00:20:17.593 "trsvcid": "40190" 00:20:17.593 }, 00:20:17.593 "auth": { 00:20:17.593 "state": "completed", 00:20:17.593 "digest": "sha256", 00:20:17.593 "dhgroup": "ffdhe3072" 00:20:17.593 } 00:20:17.593 } 00:20:17.593 ]' 00:20:17.593 20:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.593 20:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:17.593 20:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.593 20:21:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:17.593 20:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.593 20:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.593 20:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.593 20:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.850 20:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:20:17.850 20:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:20:18.788 20:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.788 20:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.788 20:21:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.788 20:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.788 20:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.788 20:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.788 20:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:18.788 20:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:19.046 20:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:19.046 20:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.046 20:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:19.046 20:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:19.046 20:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:19.046 20:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.046 20:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.046 20:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.046 20:21:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.046 20:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.046 20:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.046 20:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.047 20:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.306 00:20:19.306 20:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.306 20:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.306 20:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.564 20:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.564 20:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.564 20:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.564 20:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:19.564 20:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.565 20:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.565 { 00:20:19.565 "cntlid": 19, 00:20:19.565 "qid": 0, 00:20:19.565 "state": "enabled", 00:20:19.565 "thread": "nvmf_tgt_poll_group_000", 00:20:19.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:19.565 "listen_address": { 00:20:19.565 "trtype": "TCP", 00:20:19.565 "adrfam": "IPv4", 00:20:19.565 "traddr": "10.0.0.2", 00:20:19.565 "trsvcid": "4420" 00:20:19.565 }, 00:20:19.565 "peer_address": { 00:20:19.565 "trtype": "TCP", 00:20:19.565 "adrfam": "IPv4", 00:20:19.565 "traddr": "10.0.0.1", 00:20:19.565 "trsvcid": "39892" 00:20:19.565 }, 00:20:19.565 "auth": { 00:20:19.565 "state": "completed", 00:20:19.565 "digest": "sha256", 00:20:19.565 "dhgroup": "ffdhe3072" 00:20:19.565 } 00:20:19.565 } 00:20:19.565 ]' 00:20:19.565 20:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.823 20:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:19.824 20:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.824 20:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:19.824 20:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.824 20:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.824 20:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.824 20:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.082 20:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:20:20.082 20:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:20:21.018 20:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.018 20:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:21.018 20:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.018 20:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.018 20:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.018 20:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.018 20:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:21.018 20:21:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:21.276 20:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:21.276 20:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.276 20:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:21.276 20:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:21.276 20:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:21.276 20:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.276 20:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.276 20:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.276 20:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.276 20:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.276 20:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.277 20:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.277 20:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.535 00:20:21.535 20:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.535 20:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.535 20:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.793 20:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.793 20:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.793 20:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.793 20:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.793 20:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.793 20:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.793 { 00:20:21.793 "cntlid": 21, 00:20:21.793 "qid": 0, 00:20:21.793 "state": "enabled", 00:20:21.793 "thread": "nvmf_tgt_poll_group_000", 00:20:21.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:21.793 "listen_address": { 00:20:21.793 "trtype": "TCP", 00:20:21.793 "adrfam": "IPv4", 00:20:21.793 "traddr": "10.0.0.2", 00:20:21.793 
"trsvcid": "4420" 00:20:21.793 }, 00:20:21.793 "peer_address": { 00:20:21.793 "trtype": "TCP", 00:20:21.793 "adrfam": "IPv4", 00:20:21.793 "traddr": "10.0.0.1", 00:20:21.793 "trsvcid": "39928" 00:20:21.793 }, 00:20:21.793 "auth": { 00:20:21.793 "state": "completed", 00:20:21.793 "digest": "sha256", 00:20:21.793 "dhgroup": "ffdhe3072" 00:20:21.793 } 00:20:21.793 } 00:20:21.793 ]' 00:20:21.793 20:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.793 20:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:21.793 20:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.051 20:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:22.051 20:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.051 20:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.051 20:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.051 20:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.309 20:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983: 00:20:22.309 20:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983: 00:20:23.247 20:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.247 20:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:23.247 20:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.247 20:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.247 20:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.247 20:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.247 20:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:23.247 20:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:23.507 20:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:23.507 20:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.507 20:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:23.507 20:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:23.507 20:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:23.507 20:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.507 20:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:23.507 20:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.507 20:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.507 20:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.507 20:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:23.507 20:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:23.507 20:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:23.766 00:20:23.766 20:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.766 20:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.766 20:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.024 20:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.024 20:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.024 20:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.024 20:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.024 20:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.025 20:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.025 { 00:20:24.025 "cntlid": 23, 00:20:24.025 "qid": 0, 00:20:24.025 "state": "enabled", 00:20:24.025 "thread": "nvmf_tgt_poll_group_000", 00:20:24.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:24.025 "listen_address": { 00:20:24.025 "trtype": "TCP", 00:20:24.025 "adrfam": "IPv4", 00:20:24.025 "traddr": "10.0.0.2", 00:20:24.025 "trsvcid": "4420" 00:20:24.025 }, 00:20:24.025 "peer_address": { 00:20:24.025 "trtype": "TCP", 00:20:24.025 "adrfam": "IPv4", 00:20:24.025 "traddr": "10.0.0.1", 00:20:24.025 "trsvcid": "39946" 00:20:24.025 }, 00:20:24.025 "auth": { 00:20:24.025 "state": "completed", 00:20:24.025 "digest": "sha256", 00:20:24.025 "dhgroup": "ffdhe3072" 00:20:24.025 } 00:20:24.025 } 00:20:24.025 ]' 00:20:24.025 20:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.025 20:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:24.025 20:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.025 20:21:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:24.025 20:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.025 20:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.025 20:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.025 20:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.596 20:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:20:24.596 20:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:20:25.535 20:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.535 20:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.535 20:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.535 20:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:25.535 20:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.535 20:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:25.535 20:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.535 20:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:25.535 20:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:25.535 20:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:25.535 20:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.535 20:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:25.535 20:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:25.535 20:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:25.535 20:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.535 20:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.535 20:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.535 20:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:25.536 20:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.536 20:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.536 20:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.536 20:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.102 00:20:26.102 20:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.102 20:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.102 20:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.361 20:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.361 20:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.361 20:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.361 20:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.361 20:21:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.361 20:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.361 { 00:20:26.361 "cntlid": 25, 00:20:26.361 "qid": 0, 00:20:26.361 "state": "enabled", 00:20:26.361 "thread": "nvmf_tgt_poll_group_000", 00:20:26.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:26.361 "listen_address": { 00:20:26.361 "trtype": "TCP", 00:20:26.361 "adrfam": "IPv4", 00:20:26.361 "traddr": "10.0.0.2", 00:20:26.361 "trsvcid": "4420" 00:20:26.361 }, 00:20:26.361 "peer_address": { 00:20:26.361 "trtype": "TCP", 00:20:26.361 "adrfam": "IPv4", 00:20:26.361 "traddr": "10.0.0.1", 00:20:26.361 "trsvcid": "39968" 00:20:26.361 }, 00:20:26.361 "auth": { 00:20:26.361 "state": "completed", 00:20:26.361 "digest": "sha256", 00:20:26.361 "dhgroup": "ffdhe4096" 00:20:26.361 } 00:20:26.361 } 00:20:26.361 ]' 00:20:26.361 20:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.361 20:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:26.361 20:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.361 20:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:26.361 20:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.361 20:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.361 20:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.361 20:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.619 20:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:20:26.619 20:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:20:27.561 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.561 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.561 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.561 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.561 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.561 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.561 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:27.561 20:21:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:27.820 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:27.820 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.820 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:27.820 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:27.820 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:27.820 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.820 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.820 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.820 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.820 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.820 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.820 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.820 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.392 00:20:28.392 20:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.392 20:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.392 20:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.651 20:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.651 20:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.651 20:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.651 20:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.651 20:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.651 20:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.651 { 00:20:28.651 "cntlid": 27, 00:20:28.651 "qid": 0, 00:20:28.651 "state": "enabled", 00:20:28.651 "thread": "nvmf_tgt_poll_group_000", 00:20:28.651 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:28.651 "listen_address": { 00:20:28.651 "trtype": "TCP", 00:20:28.651 "adrfam": "IPv4", 00:20:28.651 "traddr": "10.0.0.2", 00:20:28.651 
"trsvcid": "4420" 00:20:28.651 }, 00:20:28.651 "peer_address": { 00:20:28.651 "trtype": "TCP", 00:20:28.651 "adrfam": "IPv4", 00:20:28.651 "traddr": "10.0.0.1", 00:20:28.651 "trsvcid": "51088" 00:20:28.651 }, 00:20:28.651 "auth": { 00:20:28.651 "state": "completed", 00:20:28.651 "digest": "sha256", 00:20:28.651 "dhgroup": "ffdhe4096" 00:20:28.651 } 00:20:28.651 } 00:20:28.651 ]' 00:20:28.651 20:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.651 20:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:28.651 20:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.651 20:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:28.651 20:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.651 20:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.651 20:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.651 20:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.910 20:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:20:28.910 20:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:20:29.862 20:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.862 20:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.862 20:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.862 20:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.862 20:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.862 20:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.862 20:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:29.863 20:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:30.121 20:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:30.121 20:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.121 20:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:30.121 20:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:30.121 20:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:30.121 20:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.121 20:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.121 20:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.121 20:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.121 20:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.121 20:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.121 20:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.121 20:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.380 00:20:30.380 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.380 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:20:30.380 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.948 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.948 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.948 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.948 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.948 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.948 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.948 { 00:20:30.948 "cntlid": 29, 00:20:30.948 "qid": 0, 00:20:30.948 "state": "enabled", 00:20:30.948 "thread": "nvmf_tgt_poll_group_000", 00:20:30.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:30.948 "listen_address": { 00:20:30.948 "trtype": "TCP", 00:20:30.948 "adrfam": "IPv4", 00:20:30.948 "traddr": "10.0.0.2", 00:20:30.948 "trsvcid": "4420" 00:20:30.948 }, 00:20:30.948 "peer_address": { 00:20:30.948 "trtype": "TCP", 00:20:30.948 "adrfam": "IPv4", 00:20:30.948 "traddr": "10.0.0.1", 00:20:30.948 "trsvcid": "51104" 00:20:30.948 }, 00:20:30.948 "auth": { 00:20:30.948 "state": "completed", 00:20:30.948 "digest": "sha256", 00:20:30.948 "dhgroup": "ffdhe4096" 00:20:30.948 } 00:20:30.948 } 00:20:30.948 ]' 00:20:30.948 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.948 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:30.948 20:21:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.948 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:30.948 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.948 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.948 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.948 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.206 20:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983: 00:20:31.206 20:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983: 00:20:32.145 20:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.145 20:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:32.145 20:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.145 20:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.145 20:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.145 20:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.145 20:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:32.145 20:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:32.404 20:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:32.404 20:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.404 20:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:32.404 20:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:32.404 20:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:32.404 20:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.404 20:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:32.404 20:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.404 20:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.404 20:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.404 20:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:32.404 20:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:32.404 20:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:32.663 00:20:32.663 20:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.663 20:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.663 20:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.922 20:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.922 20:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.922 20:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.922 20:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:32.922 20:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.922 20:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.922 { 00:20:32.922 "cntlid": 31, 00:20:32.922 "qid": 0, 00:20:32.922 "state": "enabled", 00:20:32.922 "thread": "nvmf_tgt_poll_group_000", 00:20:32.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:32.922 "listen_address": { 00:20:32.922 "trtype": "TCP", 00:20:32.922 "adrfam": "IPv4", 00:20:32.922 "traddr": "10.0.0.2", 00:20:32.922 "trsvcid": "4420" 00:20:32.922 }, 00:20:32.922 "peer_address": { 00:20:32.922 "trtype": "TCP", 00:20:32.922 "adrfam": "IPv4", 00:20:32.922 "traddr": "10.0.0.1", 00:20:32.922 "trsvcid": "51120" 00:20:32.922 }, 00:20:32.922 "auth": { 00:20:32.922 "state": "completed", 00:20:32.922 "digest": "sha256", 00:20:32.922 "dhgroup": "ffdhe4096" 00:20:32.922 } 00:20:32.922 } 00:20:32.922 ]' 00:20:32.922 20:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.181 20:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:33.181 20:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.181 20:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:33.181 20:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.181 20:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.181 20:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.181 20:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.440 20:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:20:33.440 20:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:20:34.375 20:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.375 20:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.375 20:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.375 20:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.375 20:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.375 20:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:34.375 20:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.375 20:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:34.375 20:21:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:34.634 20:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:34.634 20:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.634 20:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:34.634 20:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:34.634 20:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:34.634 20:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.634 20:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.634 20:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.634 20:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.634 20:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.634 20:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.634 20:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.634 20:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.203 00:20:35.203 20:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.203 20:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.203 20:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.461 20:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.461 20:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.461 20:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.461 20:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.461 20:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.461 20:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.461 { 00:20:35.461 "cntlid": 33, 00:20:35.461 "qid": 0, 00:20:35.461 "state": "enabled", 00:20:35.461 "thread": "nvmf_tgt_poll_group_000", 00:20:35.461 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:35.461 "listen_address": { 00:20:35.461 "trtype": "TCP", 00:20:35.461 "adrfam": "IPv4", 00:20:35.461 "traddr": "10.0.0.2", 00:20:35.461 
"trsvcid": "4420" 00:20:35.461 }, 00:20:35.461 "peer_address": { 00:20:35.461 "trtype": "TCP", 00:20:35.461 "adrfam": "IPv4", 00:20:35.461 "traddr": "10.0.0.1", 00:20:35.461 "trsvcid": "51150" 00:20:35.461 }, 00:20:35.461 "auth": { 00:20:35.461 "state": "completed", 00:20:35.461 "digest": "sha256", 00:20:35.461 "dhgroup": "ffdhe6144" 00:20:35.461 } 00:20:35.461 } 00:20:35.461 ]' 00:20:35.461 20:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.461 20:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:35.461 20:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.461 20:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:35.462 20:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.462 20:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.462 20:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.462 20:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.721 20:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:20:35.721 20:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:20:36.662 20:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.662 20:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.662 20:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.662 20:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.662 20:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.662 20:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.662 20:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:36.662 20:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:36.921 20:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:36.921 20:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.921 20:21:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:36.921 20:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:36.921 20:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:36.921 20:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.921 20:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.921 20:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.921 20:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.921 20:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.921 20:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.921 20:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.921 20:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.486 00:20:37.486 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.486 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.486 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.744 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.744 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.744 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.744 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.744 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.744 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.744 { 00:20:37.744 "cntlid": 35, 00:20:37.744 "qid": 0, 00:20:37.744 "state": "enabled", 00:20:37.744 "thread": "nvmf_tgt_poll_group_000", 00:20:37.744 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:37.744 "listen_address": { 00:20:37.744 "trtype": "TCP", 00:20:37.744 "adrfam": "IPv4", 00:20:37.744 "traddr": "10.0.0.2", 00:20:37.744 "trsvcid": "4420" 00:20:37.744 }, 00:20:37.744 "peer_address": { 00:20:37.744 "trtype": "TCP", 00:20:37.744 "adrfam": "IPv4", 00:20:37.744 "traddr": "10.0.0.1", 00:20:37.744 "trsvcid": "51166" 00:20:37.744 }, 00:20:37.744 "auth": { 00:20:37.744 "state": "completed", 00:20:37.744 "digest": "sha256", 00:20:37.744 "dhgroup": "ffdhe6144" 00:20:37.744 } 00:20:37.744 } 00:20:37.744 ]' 00:20:37.744 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.744 20:21:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:37.744 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.744 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:37.744 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.004 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.004 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.004 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.263 20:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:20:38.263 20:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:20:39.200 20:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.200 20:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:39.200 20:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.200 20:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.201 20:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.201 20:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:39.201 20:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:39.201 20:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:39.459 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:39.459 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.459 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:39.459 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:39.459 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:39.459 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.459 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:39.459 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.459 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.459 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.459 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.459 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.459 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.028 00:20:40.028 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.028 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.028 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.286 20:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.286 20:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.286 20:21:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.286 20:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.287 20:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.287 20:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.287 { 00:20:40.287 "cntlid": 37, 00:20:40.287 "qid": 0, 00:20:40.287 "state": "enabled", 00:20:40.287 "thread": "nvmf_tgt_poll_group_000", 00:20:40.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:40.287 "listen_address": { 00:20:40.287 "trtype": "TCP", 00:20:40.287 "adrfam": "IPv4", 00:20:40.287 "traddr": "10.0.0.2", 00:20:40.287 "trsvcid": "4420" 00:20:40.287 }, 00:20:40.287 "peer_address": { 00:20:40.287 "trtype": "TCP", 00:20:40.287 "adrfam": "IPv4", 00:20:40.287 "traddr": "10.0.0.1", 00:20:40.287 "trsvcid": "47702" 00:20:40.287 }, 00:20:40.287 "auth": { 00:20:40.287 "state": "completed", 00:20:40.287 "digest": "sha256", 00:20:40.287 "dhgroup": "ffdhe6144" 00:20:40.287 } 00:20:40.287 } 00:20:40.287 ]' 00:20:40.287 20:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.287 20:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:40.287 20:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.287 20:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:40.287 20:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.287 20:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.287 20:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.287 20:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.545 20:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983: 00:20:40.545 20:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983: 00:20:41.484 20:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.484 20:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.484 20:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.484 20:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.484 20:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.484 20:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.484 20:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:41.484 20:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:41.743 20:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:41.743 20:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.744 20:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:41.744 20:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:41.744 20:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:41.744 20:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.744 20:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:41.744 20:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.744 20:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.744 20:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.744 20:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:41.744 20:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:41.744 20:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:42.310 00:20:42.310 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.310 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.310 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.568 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.568 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.568 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.568 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.568 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.568 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.568 { 00:20:42.568 "cntlid": 39, 00:20:42.568 "qid": 0, 00:20:42.568 "state": "enabled", 00:20:42.568 "thread": "nvmf_tgt_poll_group_000", 00:20:42.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:42.568 "listen_address": { 00:20:42.568 "trtype": "TCP", 00:20:42.568 "adrfam": 
"IPv4", 00:20:42.568 "traddr": "10.0.0.2", 00:20:42.568 "trsvcid": "4420" 00:20:42.568 }, 00:20:42.568 "peer_address": { 00:20:42.568 "trtype": "TCP", 00:20:42.569 "adrfam": "IPv4", 00:20:42.569 "traddr": "10.0.0.1", 00:20:42.569 "trsvcid": "47728" 00:20:42.569 }, 00:20:42.569 "auth": { 00:20:42.569 "state": "completed", 00:20:42.569 "digest": "sha256", 00:20:42.569 "dhgroup": "ffdhe6144" 00:20:42.569 } 00:20:42.569 } 00:20:42.569 ]' 00:20:42.569 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.569 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:42.569 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.569 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:42.569 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.569 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.569 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.569 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.137 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:20:43.137 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:20:43.706 20:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.964 20:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.964 20:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.964 20:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.964 20:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.964 20:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:43.964 20:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.964 20:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:43.964 20:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:44.223 20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:44.223 20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.223 20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:44.223 
20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:44.223 20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:44.223 20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.223 20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.223 20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.223 20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.223 20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.223 20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.223 20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.223 20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.159 00:20:45.159 20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.159 20:21:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.159 20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.159 20:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.159 20:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.159 20:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.159 20:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.159 20:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.159 20:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.159 { 00:20:45.159 "cntlid": 41, 00:20:45.159 "qid": 0, 00:20:45.159 "state": "enabled", 00:20:45.159 "thread": "nvmf_tgt_poll_group_000", 00:20:45.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:45.159 "listen_address": { 00:20:45.159 "trtype": "TCP", 00:20:45.159 "adrfam": "IPv4", 00:20:45.159 "traddr": "10.0.0.2", 00:20:45.159 "trsvcid": "4420" 00:20:45.159 }, 00:20:45.159 "peer_address": { 00:20:45.159 "trtype": "TCP", 00:20:45.159 "adrfam": "IPv4", 00:20:45.159 "traddr": "10.0.0.1", 00:20:45.159 "trsvcid": "47758" 00:20:45.159 }, 00:20:45.159 "auth": { 00:20:45.159 "state": "completed", 00:20:45.159 "digest": "sha256", 00:20:45.159 "dhgroup": "ffdhe8192" 00:20:45.159 } 00:20:45.159 } 00:20:45.159 ]' 00:20:45.159 20:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.159 20:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:20:45.159 20:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.417 20:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:45.417 20:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.417 20:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.417 20:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.417 20:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.675 20:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:20:45.675 20:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:20:46.610 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.610 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.610 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.610 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.610 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.610 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.610 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:46.610 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:46.868 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:46.868 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.868 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:46.868 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:46.868 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:46.868 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.868 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:46.868 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.868 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.868 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.868 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.868 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.868 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.803 00:20:47.803 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.803 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.804 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.804 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.804 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.804 20:21:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.804 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.804 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.804 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.804 { 00:20:47.804 "cntlid": 43, 00:20:47.804 "qid": 0, 00:20:47.804 "state": "enabled", 00:20:47.804 "thread": "nvmf_tgt_poll_group_000", 00:20:47.804 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:47.804 "listen_address": { 00:20:47.804 "trtype": "TCP", 00:20:47.804 "adrfam": "IPv4", 00:20:47.804 "traddr": "10.0.0.2", 00:20:47.804 "trsvcid": "4420" 00:20:47.804 }, 00:20:47.804 "peer_address": { 00:20:47.804 "trtype": "TCP", 00:20:47.804 "adrfam": "IPv4", 00:20:47.804 "traddr": "10.0.0.1", 00:20:47.804 "trsvcid": "47784" 00:20:47.804 }, 00:20:47.804 "auth": { 00:20:47.804 "state": "completed", 00:20:47.804 "digest": "sha256", 00:20:47.804 "dhgroup": "ffdhe8192" 00:20:47.804 } 00:20:47.804 } 00:20:47.804 ]' 00:20:47.804 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.063 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:48.063 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.063 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:48.063 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.063 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.063 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.063 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.322 20:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:20:48.322 20:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:20:49.257 20:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.257 20:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.257 20:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.257 20:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.257 20:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.257 20:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.257 20:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:49.257 20:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:49.516 20:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:49.516 20:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.516 20:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:49.516 20:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:49.516 20:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:49.516 20:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.516 20:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.516 20:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.516 20:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.516 20:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.516 20:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.516 20:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.516 20:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.454 00:20:50.454 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.454 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.454 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.454 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.454 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.454 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.454 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.454 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.454 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.454 { 00:20:50.454 "cntlid": 45, 00:20:50.454 "qid": 0, 00:20:50.454 "state": "enabled", 00:20:50.454 "thread": "nvmf_tgt_poll_group_000", 00:20:50.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:50.454 
"listen_address": { 00:20:50.454 "trtype": "TCP", 00:20:50.454 "adrfam": "IPv4", 00:20:50.454 "traddr": "10.0.0.2", 00:20:50.454 "trsvcid": "4420" 00:20:50.454 }, 00:20:50.454 "peer_address": { 00:20:50.454 "trtype": "TCP", 00:20:50.454 "adrfam": "IPv4", 00:20:50.454 "traddr": "10.0.0.1", 00:20:50.454 "trsvcid": "38414" 00:20:50.454 }, 00:20:50.454 "auth": { 00:20:50.454 "state": "completed", 00:20:50.454 "digest": "sha256", 00:20:50.454 "dhgroup": "ffdhe8192" 00:20:50.454 } 00:20:50.454 } 00:20:50.454 ]' 00:20:50.454 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.713 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:50.713 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.713 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:50.713 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.713 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.713 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.713 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.972 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983: 00:20:50.972 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983: 00:20:51.911 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.911 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.911 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.911 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.911 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.911 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.911 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:51.911 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:52.170 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:52.170 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.170 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:20:52.170 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:52.170 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:52.170 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.170 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:52.170 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.170 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.170 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.170 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:52.171 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:52.171 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:53.109 00:20:53.109 20:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.109 20:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:20:53.109 20:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.109 20:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.109 20:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.109 20:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.109 20:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.109 20:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.109 20:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.109 { 00:20:53.109 "cntlid": 47, 00:20:53.109 "qid": 0, 00:20:53.109 "state": "enabled", 00:20:53.109 "thread": "nvmf_tgt_poll_group_000", 00:20:53.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:53.109 "listen_address": { 00:20:53.109 "trtype": "TCP", 00:20:53.109 "adrfam": "IPv4", 00:20:53.109 "traddr": "10.0.0.2", 00:20:53.109 "trsvcid": "4420" 00:20:53.109 }, 00:20:53.109 "peer_address": { 00:20:53.109 "trtype": "TCP", 00:20:53.109 "adrfam": "IPv4", 00:20:53.109 "traddr": "10.0.0.1", 00:20:53.109 "trsvcid": "38438" 00:20:53.109 }, 00:20:53.109 "auth": { 00:20:53.109 "state": "completed", 00:20:53.109 "digest": "sha256", 00:20:53.109 "dhgroup": "ffdhe8192" 00:20:53.109 } 00:20:53.109 } 00:20:53.109 ]' 00:20:53.109 20:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.368 20:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:53.368 20:22:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.368 20:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:53.368 20:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.368 20:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.368 20:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.368 20:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.626 20:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:20:53.626 20:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:20:54.562 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.562 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.562 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:54.562 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.562 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.562 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:54.562 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:54.562 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.562 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:54.562 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:54.820 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:54.820 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.820 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:54.820 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:54.820 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:54.820 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.820 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.821 
20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.821 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.821 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.821 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.821 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.821 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.079 00:20:55.079 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.079 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.079 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.337 20:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.337 20:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.337 20:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.337 20:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.337 20:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.337 20:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.337 { 00:20:55.337 "cntlid": 49, 00:20:55.337 "qid": 0, 00:20:55.337 "state": "enabled", 00:20:55.337 "thread": "nvmf_tgt_poll_group_000", 00:20:55.337 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:55.337 "listen_address": { 00:20:55.337 "trtype": "TCP", 00:20:55.337 "adrfam": "IPv4", 00:20:55.337 "traddr": "10.0.0.2", 00:20:55.337 "trsvcid": "4420" 00:20:55.337 }, 00:20:55.337 "peer_address": { 00:20:55.337 "trtype": "TCP", 00:20:55.337 "adrfam": "IPv4", 00:20:55.337 "traddr": "10.0.0.1", 00:20:55.337 "trsvcid": "38478" 00:20:55.337 }, 00:20:55.337 "auth": { 00:20:55.337 "state": "completed", 00:20:55.337 "digest": "sha384", 00:20:55.337 "dhgroup": "null" 00:20:55.337 } 00:20:55.337 } 00:20:55.337 ]' 00:20:55.337 20:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.337 20:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.337 20:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.596 20:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:55.596 20:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.596 20:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.596 20:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:20:55.596 20:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.854 20:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:20:55.854 20:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:20:56.788 20:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.788 20:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.788 20:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.788 20:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.788 20:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.788 20:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.788 20:22:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:56.788 20:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:57.046 20:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:57.046 20:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.046 20:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:57.046 20:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:57.046 20:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:57.046 20:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.046 20:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.046 20:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.046 20:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.046 20:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.046 20:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.046 20:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.046 20:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.310 00:20:57.310 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.310 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.310 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.568 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.568 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.568 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.568 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.568 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.568 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.568 { 00:20:57.568 "cntlid": 51, 00:20:57.568 "qid": 0, 00:20:57.568 "state": "enabled", 00:20:57.568 "thread": "nvmf_tgt_poll_group_000", 00:20:57.568 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:57.568 "listen_address": { 00:20:57.568 "trtype": "TCP", 00:20:57.568 "adrfam": "IPv4", 00:20:57.568 "traddr": "10.0.0.2", 00:20:57.568 "trsvcid": "4420" 00:20:57.568 }, 00:20:57.568 "peer_address": { 00:20:57.568 "trtype": "TCP", 00:20:57.568 "adrfam": "IPv4", 00:20:57.568 "traddr": "10.0.0.1", 00:20:57.568 "trsvcid": "38502" 00:20:57.568 }, 00:20:57.568 "auth": { 00:20:57.568 "state": "completed", 00:20:57.568 "digest": "sha384", 00:20:57.568 "dhgroup": "null" 00:20:57.568 } 00:20:57.568 } 00:20:57.568 ]' 00:20:57.568 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.568 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.568 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.568 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:57.568 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.827 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.827 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.827 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.085 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:20:58.085 20:22:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:20:59.022 20:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.022 20:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.022 20:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.022 20:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.022 20:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.023 20:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.023 20:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:59.023 20:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:59.281 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:59.281 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:20:59.281 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:59.281 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:59.281 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:59.281 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.281 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.281 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.281 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.281 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.281 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.281 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.281 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.539 00:20:59.539 20:22:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:59.539 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:59.539 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:59.798 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:59.798 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:59.798 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:59.798 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:59.798 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:59.798 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:59.798 {
00:20:59.798 "cntlid": 53,
00:20:59.798 "qid": 0,
00:20:59.798 "state": "enabled",
00:20:59.798 "thread": "nvmf_tgt_poll_group_000",
00:20:59.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:20:59.798 "listen_address": {
00:20:59.798 "trtype": "TCP",
00:20:59.798 "adrfam": "IPv4",
00:20:59.798 "traddr": "10.0.0.2",
00:20:59.798 "trsvcid": "4420"
00:20:59.798 },
00:20:59.798 "peer_address": {
00:20:59.798 "trtype": "TCP",
00:20:59.798 "adrfam": "IPv4",
00:20:59.798 "traddr": "10.0.0.1",
00:20:59.798 "trsvcid": "34186"
00:20:59.798 },
00:20:59.798 "auth": {
00:20:59.798 "state": "completed",
00:20:59.798 "digest": "sha384",
00:20:59.798 "dhgroup": "null"
00:20:59.798 }
00:20:59.798 }
00:20:59.798 ]'
00:20:59.798 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:59.798 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:59.798 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:59.798 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:59.798 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:00.056 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:00.056 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:00.056 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:00.316 20:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983:
00:21:00.316 20:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983:
00:21:01.334 20:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:01.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:01.334 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:01.334 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:01.334 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:01.334 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:01.334 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:01.334 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:21:01.334 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:21:01.334 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3
00:21:01.334 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:01.334 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:01.334 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:21:01.334 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:01.334 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:01.334 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:21:01.334 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:01.334 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:01.334 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:01.334 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:01.334 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:01.334 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:01.950
00:21:01.950 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:01.950 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:01.950 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:01.950 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:01.950 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:01.950 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:01.950 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:01.950 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:01.950 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:01.950 {
00:21:01.950 "cntlid": 55,
00:21:01.950 "qid": 0,
00:21:01.950 "state": "enabled",
00:21:01.950 "thread": "nvmf_tgt_poll_group_000",
00:21:01.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:21:01.950 "listen_address": {
00:21:01.950 "trtype": "TCP",
00:21:01.950 "adrfam": "IPv4",
00:21:01.950 "traddr": "10.0.0.2",
00:21:01.950 "trsvcid": "4420"
00:21:01.950 },
00:21:01.950 "peer_address": {
00:21:01.950 "trtype": "TCP",
00:21:01.950 "adrfam": "IPv4",
00:21:01.950 "traddr": "10.0.0.1",
00:21:01.950 "trsvcid": "34218"
00:21:01.950 },
00:21:01.950 "auth": {
00:21:01.950 "state": "completed",
00:21:01.950 "digest": "sha384",
00:21:01.950 "dhgroup": "null"
00:21:01.950 }
00:21:01.950 }
00:21:01.950 ]'
00:21:01.950 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:02.218 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:02.218 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:02.218 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:21:02.218 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:02.218 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:02.218 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:02.218 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:02.504 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=:
00:21:02.504 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=:
00:21:03.479 20:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:03.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:03.479 20:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:03.479 20:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:03.479 20:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:03.479 20:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:03.479 20:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:03.479 20:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:03.479 20:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:21:03.479 20:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:21:03.737 20:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0
00:21:03.737 20:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:03.737 20:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:03.737 20:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:03.737 20:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:03.737 20:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:03.737 20:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:03.737 20:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:03.737 20:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:03.737 20:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:03.737 20:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:03.737 20:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:03.737 20:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:03.995
00:21:03.995 20:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:03.995 20:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:03.995 20:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:04.253 20:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:04.253 20:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:04.253 20:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:04.253 20:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:04.253 20:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:04.253 20:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:04.253 {
00:21:04.253 "cntlid": 57,
00:21:04.253 "qid": 0,
00:21:04.253 "state": "enabled",
00:21:04.253 "thread": "nvmf_tgt_poll_group_000",
00:21:04.253 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:21:04.253 "listen_address": {
00:21:04.253 "trtype": "TCP",
00:21:04.253 "adrfam": "IPv4",
00:21:04.253 "traddr": "10.0.0.2",
00:21:04.253 "trsvcid": "4420"
00:21:04.253 },
00:21:04.253 "peer_address": {
00:21:04.253 "trtype": "TCP",
00:21:04.253 "adrfam": "IPv4",
00:21:04.253 "traddr": "10.0.0.1",
00:21:04.253 "trsvcid": "34232"
00:21:04.253 },
00:21:04.253 "auth": {
00:21:04.253 "state": "completed",
00:21:04.253 "digest": "sha384",
00:21:04.253 "dhgroup": "ffdhe2048"
00:21:04.253 }
00:21:04.253 }
00:21:04.253 ]'
00:21:04.253 20:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:04.253 20:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:04.253 20:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:04.253 20:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:04.253 20:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:04.511 20:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:04.511 20:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:04.511 20:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:04.769 20:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=:
00:21:04.769 20:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=:
00:21:05.707 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:05.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:05.707 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:05.707 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:05.707 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:05.707 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:05.707 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:05.707 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:21:05.707 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:21:05.707 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1
00:21:05.707 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:05.707 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:05.707 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:05.707 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:05.707 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:05.707 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:05.707 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:05.707 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:05.965 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:05.965 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:05.965 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:05.965 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:06.223
00:21:06.223 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:06.223 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:06.223 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:06.483 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:06.483 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:06.483 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:06.483 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:06.483 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:06.483 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:06.483 {
00:21:06.483 "cntlid": 59,
00:21:06.483 "qid": 0,
00:21:06.483 "state": "enabled",
00:21:06.483 "thread": "nvmf_tgt_poll_group_000",
00:21:06.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:21:06.483 "listen_address": {
00:21:06.483 "trtype": "TCP",
00:21:06.483 "adrfam": "IPv4",
00:21:06.483 "traddr": "10.0.0.2",
00:21:06.483 "trsvcid": "4420"
00:21:06.483 },
00:21:06.483 "peer_address": {
00:21:06.483 "trtype": "TCP",
00:21:06.483 "adrfam": "IPv4",
00:21:06.483 "traddr": "10.0.0.1",
00:21:06.483 "trsvcid": "34258"
00:21:06.483 },
00:21:06.483 "auth": {
00:21:06.483 "state": "completed",
00:21:06.483 "digest": "sha384",
00:21:06.483 "dhgroup": "ffdhe2048"
00:21:06.483 }
00:21:06.483 }
00:21:06.483 ]'
00:21:06.483 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:06.484 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:06.484 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:06.484 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:06.484 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:06.484 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:06.484 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:06.484 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:06.743 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==:
00:21:06.743 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==:
00:21:07.680 20:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:07.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:07.680 20:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:07.680 20:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:07.680 20:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:07.680 20:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:07.680 20:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:07.680 20:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:21:07.680 20:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:21:07.938 20:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2
00:21:07.938 20:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:07.938 20:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:07.938 20:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:07.938 20:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:07.938 20:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:07.938 20:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:07.938 20:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:07.938 20:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:07.938 20:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:07.938 20:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:07.938 20:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:07.938 20:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:08.198
00:21:08.457 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:08.457 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:08.457 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:08.716 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:08.716 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:08.716 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:08.716 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:08.716 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:08.716 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:08.716 {
00:21:08.716 "cntlid": 61,
00:21:08.716 "qid": 0,
00:21:08.716 "state": "enabled",
00:21:08.716 "thread": "nvmf_tgt_poll_group_000",
00:21:08.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:21:08.716 "listen_address": {
00:21:08.716 "trtype": "TCP",
00:21:08.716 "adrfam": "IPv4",
00:21:08.716 "traddr": "10.0.0.2",
00:21:08.716 "trsvcid": "4420"
00:21:08.716 },
00:21:08.716 "peer_address": {
00:21:08.716 "trtype": "TCP",
00:21:08.716 "adrfam": "IPv4",
00:21:08.716 "traddr": "10.0.0.1",
00:21:08.716 "trsvcid": "40650"
00:21:08.716 },
00:21:08.716 "auth": {
00:21:08.716 "state": "completed",
00:21:08.716 "digest": "sha384",
00:21:08.716 "dhgroup": "ffdhe2048"
00:21:08.716 }
00:21:08.716 }
00:21:08.716 ]'
00:21:08.716 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:08.716 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:08.716 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:08.716 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:08.716 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:08.716 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:08.716 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:08.716 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:08.975 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983:
00:21:08.975 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983:
00:21:09.914 20:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:09.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:09.914 20:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:09.914 20:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:09.914 20:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:09.914 20:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:09.914 20:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:09.914 20:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:21:09.914 20:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:21:10.172 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3
00:21:10.172 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:10.172 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:10.172 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:10.172 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:10.172 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:10.172 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:21:10.172 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:10.172 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:10.172 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:10.172 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:10.172 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:10.172 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:10.430
00:21:10.430 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:10.430 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:10.430 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:10.689 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:10.689 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:10.689 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:10.689 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:10.689 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:10.689 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:10.689 {
00:21:10.689 "cntlid": 63,
00:21:10.689 "qid": 0,
00:21:10.689 "state": "enabled",
00:21:10.689 "thread": "nvmf_tgt_poll_group_000",
00:21:10.689 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:21:10.689 "listen_address": {
00:21:10.689 "trtype": "TCP",
00:21:10.689 "adrfam": "IPv4",
00:21:10.689 "traddr": "10.0.0.2",
00:21:10.689 "trsvcid": "4420"
00:21:10.689 },
00:21:10.689 "peer_address": {
00:21:10.689 "trtype": "TCP",
00:21:10.689 "adrfam": "IPv4",
00:21:10.689 "traddr": "10.0.0.1",
00:21:10.689 "trsvcid": "40670"
00:21:10.689 },
00:21:10.689 "auth": {
00:21:10.689 "state": "completed",
00:21:10.689 "digest": "sha384",
00:21:10.689 "dhgroup": "ffdhe2048"
00:21:10.689 }
00:21:10.689 }
00:21:10.689 ]'
00:21:10.689 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:10.947 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:10.947 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:10.947 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:10.947 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:10.947 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:10.947 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:10.947 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:11.205 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=:
00:21:11.206 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=:
00:21:12.144 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:12.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:12.144 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:12.144 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:12.144 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:12.144 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:12.144 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:12.144 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:12.144 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:21:12.144 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:21:12.403 20:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0
00:21:12.403 20:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:12.403 20:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:12.403
20:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:12.403 20:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:12.403 20:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.403 20:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.403 20:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.403 20:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.403 20:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.403 20:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.403 20:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.403 20:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.661 00:21:12.661 20:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.661 20:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.661 20:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.920 20:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.920 20:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.920 20:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.920 20:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.920 20:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.920 20:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.920 { 00:21:12.920 "cntlid": 65, 00:21:12.920 "qid": 0, 00:21:12.920 "state": "enabled", 00:21:12.920 "thread": "nvmf_tgt_poll_group_000", 00:21:12.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:12.920 "listen_address": { 00:21:12.920 "trtype": "TCP", 00:21:12.920 "adrfam": "IPv4", 00:21:12.920 "traddr": "10.0.0.2", 00:21:12.920 "trsvcid": "4420" 00:21:12.920 }, 00:21:12.920 "peer_address": { 00:21:12.920 "trtype": "TCP", 00:21:12.920 "adrfam": "IPv4", 00:21:12.920 "traddr": "10.0.0.1", 00:21:12.920 "trsvcid": "40704" 00:21:12.920 }, 00:21:12.920 "auth": { 00:21:12.920 "state": "completed", 00:21:12.920 "digest": "sha384", 00:21:12.920 "dhgroup": "ffdhe3072" 00:21:12.920 } 00:21:12.920 } 00:21:12.920 ]' 00:21:12.920 20:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.179 20:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:21:13.179 20:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.179 20:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:13.179 20:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.179 20:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.179 20:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.179 20:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.437 20:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:21:13.437 20:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:21:14.371 20:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.371 20:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.371 20:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.371 20:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.371 20:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.371 20:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.371 20:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:14.371 20:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:14.629 20:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:21:14.629 20:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.629 20:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:14.629 20:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:14.629 20:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:14.629 20:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.629 20:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:21:14.629 20:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.629 20:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.629 20:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.629 20:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.629 20:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.629 20:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.887 00:21:14.887 20:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.887 20:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.887 20:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.146 20:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.146 20:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.146 20:22:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.146 20:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.146 20:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.146 20:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.146 { 00:21:15.146 "cntlid": 67, 00:21:15.146 "qid": 0, 00:21:15.146 "state": "enabled", 00:21:15.146 "thread": "nvmf_tgt_poll_group_000", 00:21:15.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:15.146 "listen_address": { 00:21:15.146 "trtype": "TCP", 00:21:15.146 "adrfam": "IPv4", 00:21:15.146 "traddr": "10.0.0.2", 00:21:15.146 "trsvcid": "4420" 00:21:15.146 }, 00:21:15.146 "peer_address": { 00:21:15.146 "trtype": "TCP", 00:21:15.146 "adrfam": "IPv4", 00:21:15.146 "traddr": "10.0.0.1", 00:21:15.146 "trsvcid": "40736" 00:21:15.146 }, 00:21:15.146 "auth": { 00:21:15.146 "state": "completed", 00:21:15.146 "digest": "sha384", 00:21:15.146 "dhgroup": "ffdhe3072" 00:21:15.146 } 00:21:15.146 } 00:21:15.146 ]' 00:21:15.146 20:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.404 20:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:15.404 20:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.404 20:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:15.404 20:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.404 20:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.404 20:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.404 20:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.662 20:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:21:15.662 20:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:21:16.596 20:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.596 20:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.596 20:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.596 20:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.596 20:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.596 20:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.596 20:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:16.596 20:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:16.855 20:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:21:16.855 20:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.855 20:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:16.855 20:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:16.855 20:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:16.855 20:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.855 20:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.855 20:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.855 20:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.855 20:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.855 20:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.855 20:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.855 20:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.420 00:21:17.420 20:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.420 20:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.420 20:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.678 20:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.678 20:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.678 20:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.678 20:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.678 20:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.678 20:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.678 { 00:21:17.678 "cntlid": 69, 00:21:17.678 "qid": 0, 00:21:17.678 "state": "enabled", 00:21:17.678 "thread": "nvmf_tgt_poll_group_000", 00:21:17.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:17.678 
"listen_address": { 00:21:17.678 "trtype": "TCP", 00:21:17.678 "adrfam": "IPv4", 00:21:17.678 "traddr": "10.0.0.2", 00:21:17.678 "trsvcid": "4420" 00:21:17.678 }, 00:21:17.678 "peer_address": { 00:21:17.678 "trtype": "TCP", 00:21:17.678 "adrfam": "IPv4", 00:21:17.678 "traddr": "10.0.0.1", 00:21:17.678 "trsvcid": "40744" 00:21:17.678 }, 00:21:17.678 "auth": { 00:21:17.678 "state": "completed", 00:21:17.678 "digest": "sha384", 00:21:17.678 "dhgroup": "ffdhe3072" 00:21:17.678 } 00:21:17.678 } 00:21:17.678 ]' 00:21:17.678 20:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.678 20:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:17.678 20:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.678 20:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:17.678 20:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.678 20:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.678 20:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.678 20:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.936 20:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983: 00:21:17.936 20:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983: 00:21:18.870 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.870 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.870 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.870 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.870 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.870 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.870 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:18.870 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:19.128 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:19.128 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.128 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:21:19.128 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:19.128 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:19.128 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.128 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:19.128 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.128 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.128 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.128 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:19.128 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:19.128 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:19.386 00:21:19.386 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.386 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.386 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.645 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.645 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.645 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.645 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.645 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.645 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.645 { 00:21:19.645 "cntlid": 71, 00:21:19.645 "qid": 0, 00:21:19.645 "state": "enabled", 00:21:19.645 "thread": "nvmf_tgt_poll_group_000", 00:21:19.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:19.645 "listen_address": { 00:21:19.645 "trtype": "TCP", 00:21:19.645 "adrfam": "IPv4", 00:21:19.645 "traddr": "10.0.0.2", 00:21:19.645 "trsvcid": "4420" 00:21:19.645 }, 00:21:19.645 "peer_address": { 00:21:19.645 "trtype": "TCP", 00:21:19.645 "adrfam": "IPv4", 00:21:19.645 "traddr": "10.0.0.1", 00:21:19.645 "trsvcid": "55334" 00:21:19.645 }, 00:21:19.645 "auth": { 00:21:19.645 "state": "completed", 00:21:19.645 "digest": "sha384", 00:21:19.645 "dhgroup": "ffdhe3072" 00:21:19.645 } 00:21:19.645 } 00:21:19.645 ]' 00:21:19.645 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.902 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:19.903 20:22:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.903 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:19.903 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.903 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.903 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.903 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.160 20:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:21:20.160 20:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:21:21.094 20:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.094 20:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.094 20:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:21.094 20:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.094 20:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.094 20:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:21.094 20:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.094 20:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:21.094 20:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:21.352 20:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:21.352 20:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.352 20:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:21.352 20:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:21.352 20:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:21.352 20:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.352 20:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.352 20:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:21.352 20:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.352 20:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.352 20:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.352 20:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.352 20:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.610 00:21:21.867 20:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.867 20:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.867 20:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.125 20:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.125 20:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.125 20:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.125 20:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.125 20:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.125 20:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.125 { 00:21:22.125 "cntlid": 73, 00:21:22.125 "qid": 0, 00:21:22.125 "state": "enabled", 00:21:22.125 "thread": "nvmf_tgt_poll_group_000", 00:21:22.126 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:22.126 "listen_address": { 00:21:22.126 "trtype": "TCP", 00:21:22.126 "adrfam": "IPv4", 00:21:22.126 "traddr": "10.0.0.2", 00:21:22.126 "trsvcid": "4420" 00:21:22.126 }, 00:21:22.126 "peer_address": { 00:21:22.126 "trtype": "TCP", 00:21:22.126 "adrfam": "IPv4", 00:21:22.126 "traddr": "10.0.0.1", 00:21:22.126 "trsvcid": "55350" 00:21:22.126 }, 00:21:22.126 "auth": { 00:21:22.126 "state": "completed", 00:21:22.126 "digest": "sha384", 00:21:22.126 "dhgroup": "ffdhe4096" 00:21:22.126 } 00:21:22.126 } 00:21:22.126 ]' 00:21:22.126 20:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.126 20:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:22.126 20:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.126 20:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:22.126 20:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.126 20:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.126 20:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.126 20:22:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.384 20:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:21:22.384 20:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:21:23.316 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.316 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.316 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.316 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.316 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.316 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.316 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:23.316 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:23.573 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:23.573 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.573 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:23.573 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:23.573 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:23.573 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.573 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.573 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.573 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.573 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.573 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.573 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.573 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.138 00:21:24.138 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.138 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.138 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.395 20:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.395 20:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.395 20:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.395 20:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.395 20:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.395 20:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.395 { 00:21:24.395 "cntlid": 75, 00:21:24.395 "qid": 0, 00:21:24.395 "state": "enabled", 00:21:24.396 "thread": "nvmf_tgt_poll_group_000", 00:21:24.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:24.396 
"listen_address": { 00:21:24.396 "trtype": "TCP", 00:21:24.396 "adrfam": "IPv4", 00:21:24.396 "traddr": "10.0.0.2", 00:21:24.396 "trsvcid": "4420" 00:21:24.396 }, 00:21:24.396 "peer_address": { 00:21:24.396 "trtype": "TCP", 00:21:24.396 "adrfam": "IPv4", 00:21:24.396 "traddr": "10.0.0.1", 00:21:24.396 "trsvcid": "55384" 00:21:24.396 }, 00:21:24.396 "auth": { 00:21:24.396 "state": "completed", 00:21:24.396 "digest": "sha384", 00:21:24.396 "dhgroup": "ffdhe4096" 00:21:24.396 } 00:21:24.396 } 00:21:24.396 ]' 00:21:24.396 20:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.396 20:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:24.396 20:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.396 20:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:24.396 20:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.396 20:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.396 20:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.396 20:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.653 20:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:21:24.653 20:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:21:25.586 20:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.586 20:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.586 20:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.586 20:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.586 20:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.586 20:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.586 20:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:25.586 20:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:25.845 20:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:25.845 20:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.845 20:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:21:25.845 20:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:25.845 20:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:25.845 20:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.845 20:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.845 20:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.845 20:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.845 20:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.845 20:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.845 20:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.845 20:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.415 00:21:26.416 20:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:26.416 20:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.416 20:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.674 20:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.674 20:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.674 20:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.674 20:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.674 20:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.674 20:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.674 { 00:21:26.674 "cntlid": 77, 00:21:26.674 "qid": 0, 00:21:26.674 "state": "enabled", 00:21:26.674 "thread": "nvmf_tgt_poll_group_000", 00:21:26.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:26.674 "listen_address": { 00:21:26.674 "trtype": "TCP", 00:21:26.674 "adrfam": "IPv4", 00:21:26.674 "traddr": "10.0.0.2", 00:21:26.674 "trsvcid": "4420" 00:21:26.675 }, 00:21:26.675 "peer_address": { 00:21:26.675 "trtype": "TCP", 00:21:26.675 "adrfam": "IPv4", 00:21:26.675 "traddr": "10.0.0.1", 00:21:26.675 "trsvcid": "55422" 00:21:26.675 }, 00:21:26.675 "auth": { 00:21:26.675 "state": "completed", 00:21:26.675 "digest": "sha384", 00:21:26.675 "dhgroup": "ffdhe4096" 00:21:26.675 } 00:21:26.675 } 00:21:26.675 ]' 00:21:26.675 20:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.675 20:22:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:26.675 20:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.675 20:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:26.675 20:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.675 20:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.675 20:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.675 20:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.933 20:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983: 00:21:26.933 20:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983: 00:21:27.872 20:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.872 20:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.872 20:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.872 20:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.872 20:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.872 20:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.872 20:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:27.872 20:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:28.131 20:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:21:28.131 20:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.131 20:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:28.131 20:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:28.131 20:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:28.131 20:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.131 20:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:28.131 20:22:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.131 20:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.131 20:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.131 20:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:28.131 20:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:28.131 20:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:28.700 00:21:28.700 20:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.700 20:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.700 20:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.962 20:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.962 20:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.962 20:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.962 20:22:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.962 20:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.962 20:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.962 { 00:21:28.962 "cntlid": 79, 00:21:28.962 "qid": 0, 00:21:28.962 "state": "enabled", 00:21:28.962 "thread": "nvmf_tgt_poll_group_000", 00:21:28.962 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:28.962 "listen_address": { 00:21:28.962 "trtype": "TCP", 00:21:28.962 "adrfam": "IPv4", 00:21:28.962 "traddr": "10.0.0.2", 00:21:28.962 "trsvcid": "4420" 00:21:28.962 }, 00:21:28.962 "peer_address": { 00:21:28.962 "trtype": "TCP", 00:21:28.962 "adrfam": "IPv4", 00:21:28.962 "traddr": "10.0.0.1", 00:21:28.962 "trsvcid": "48968" 00:21:28.962 }, 00:21:28.962 "auth": { 00:21:28.962 "state": "completed", 00:21:28.962 "digest": "sha384", 00:21:28.962 "dhgroup": "ffdhe4096" 00:21:28.962 } 00:21:28.962 } 00:21:28.962 ]' 00:21:28.962 20:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.962 20:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:28.962 20:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.962 20:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:28.962 20:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.962 20:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.962 20:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.962 20:22:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.221 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:21:29.221 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:21:30.156 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.157 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.157 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.157 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.157 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.157 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:30.157 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.157 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:21:30.157 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:30.416 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:21:30.416 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.416 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:30.416 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:30.416 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:30.416 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.416 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.416 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.416 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.416 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.416 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.416 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.416 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.983 00:21:30.983 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.983 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.983 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.241 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.241 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.241 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.241 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.241 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.241 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.241 { 00:21:31.241 "cntlid": 81, 00:21:31.241 "qid": 0, 00:21:31.241 "state": "enabled", 00:21:31.241 "thread": "nvmf_tgt_poll_group_000", 00:21:31.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:31.241 "listen_address": { 
00:21:31.241 "trtype": "TCP", 00:21:31.241 "adrfam": "IPv4", 00:21:31.241 "traddr": "10.0.0.2", 00:21:31.241 "trsvcid": "4420" 00:21:31.241 }, 00:21:31.241 "peer_address": { 00:21:31.241 "trtype": "TCP", 00:21:31.241 "adrfam": "IPv4", 00:21:31.241 "traddr": "10.0.0.1", 00:21:31.241 "trsvcid": "49010" 00:21:31.241 }, 00:21:31.241 "auth": { 00:21:31.241 "state": "completed", 00:21:31.241 "digest": "sha384", 00:21:31.241 "dhgroup": "ffdhe6144" 00:21:31.241 } 00:21:31.241 } 00:21:31.241 ]' 00:21:31.241 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.499 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:31.499 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.499 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:31.499 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.499 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.499 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.499 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.757 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:21:31.757 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:21:32.695 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.695 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:32.695 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.695 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.695 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.695 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.695 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:32.695 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:32.953 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:32.953 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:21:32.953 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:32.953 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:32.953 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:32.953 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.953 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.953 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.953 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.953 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.953 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.953 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.953 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.521 00:21:33.521 20:22:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.521 20:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.521 20:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.780 20:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.780 20:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.780 20:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.780 20:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.780 20:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.780 20:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.780 { 00:21:33.780 "cntlid": 83, 00:21:33.780 "qid": 0, 00:21:33.780 "state": "enabled", 00:21:33.780 "thread": "nvmf_tgt_poll_group_000", 00:21:33.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:33.780 "listen_address": { 00:21:33.780 "trtype": "TCP", 00:21:33.780 "adrfam": "IPv4", 00:21:33.780 "traddr": "10.0.0.2", 00:21:33.780 "trsvcid": "4420" 00:21:33.780 }, 00:21:33.780 "peer_address": { 00:21:33.780 "trtype": "TCP", 00:21:33.780 "adrfam": "IPv4", 00:21:33.780 "traddr": "10.0.0.1", 00:21:33.780 "trsvcid": "49034" 00:21:33.780 }, 00:21:33.780 "auth": { 00:21:33.780 "state": "completed", 00:21:33.780 "digest": "sha384", 00:21:33.780 "dhgroup": "ffdhe6144" 00:21:33.780 } 00:21:33.780 } 00:21:33.780 ]' 00:21:33.780 20:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:21:33.780 20:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:33.780 20:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.780 20:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:33.780 20:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.780 20:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.780 20:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.780 20:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.038 20:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:21:34.038 20:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:21:34.973 20:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.973 20:22:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.973 20:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.973 20:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.973 20:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.973 20:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.973 20:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:34.973 20:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:35.232 20:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:35.232 20:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.232 20:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:35.232 20:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:35.232 20:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:35.232 20:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.232 20:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.232 20:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.232 20:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.232 20:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.232 20:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.232 20:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.232 20:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.799 00:21:35.799 20:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.799 20:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.799 20:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.057 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.057 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.057 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.057 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.057 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.057 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.057 { 00:21:36.057 "cntlid": 85, 00:21:36.057 "qid": 0, 00:21:36.057 "state": "enabled", 00:21:36.057 "thread": "nvmf_tgt_poll_group_000", 00:21:36.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:36.057 "listen_address": { 00:21:36.057 "trtype": "TCP", 00:21:36.057 "adrfam": "IPv4", 00:21:36.057 "traddr": "10.0.0.2", 00:21:36.057 "trsvcid": "4420" 00:21:36.057 }, 00:21:36.057 "peer_address": { 00:21:36.057 "trtype": "TCP", 00:21:36.057 "adrfam": "IPv4", 00:21:36.057 "traddr": "10.0.0.1", 00:21:36.057 "trsvcid": "49078" 00:21:36.057 }, 00:21:36.057 "auth": { 00:21:36.057 "state": "completed", 00:21:36.057 "digest": "sha384", 00:21:36.057 "dhgroup": "ffdhe6144" 00:21:36.057 } 00:21:36.057 } 00:21:36.057 ]' 00:21:36.057 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.057 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:36.057 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.315 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:36.315 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.315 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:36.315 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.315 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.573 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983: 00:21:36.573 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983: 00:21:37.509 20:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.509 20:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:37.509 20:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.509 20:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.509 20:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.509 20:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:21:37.509 20:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:37.509 20:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:37.766 20:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:37.766 20:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.766 20:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:37.766 20:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:37.766 20:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:37.766 20:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.766 20:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:37.766 20:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.766 20:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.766 20:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.766 20:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:37.766 20:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:37.766 20:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:38.333 00:21:38.333 20:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.333 20:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.333 20:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.591 20:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.591 20:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.591 20:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.591 20:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.591 20:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.591 20:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.591 { 00:21:38.591 "cntlid": 87, 00:21:38.591 "qid": 0, 00:21:38.591 "state": "enabled", 00:21:38.591 "thread": "nvmf_tgt_poll_group_000", 00:21:38.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:38.591 "listen_address": { 00:21:38.591 "trtype": 
"TCP", 00:21:38.591 "adrfam": "IPv4", 00:21:38.591 "traddr": "10.0.0.2", 00:21:38.591 "trsvcid": "4420" 00:21:38.591 }, 00:21:38.591 "peer_address": { 00:21:38.591 "trtype": "TCP", 00:21:38.591 "adrfam": "IPv4", 00:21:38.591 "traddr": "10.0.0.1", 00:21:38.591 "trsvcid": "56126" 00:21:38.591 }, 00:21:38.591 "auth": { 00:21:38.591 "state": "completed", 00:21:38.591 "digest": "sha384", 00:21:38.591 "dhgroup": "ffdhe6144" 00:21:38.591 } 00:21:38.591 } 00:21:38.591 ]' 00:21:38.591 20:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.591 20:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:38.591 20:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.591 20:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:38.591 20:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.591 20:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.591 20:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.591 20:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.157 20:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:21:39.157 20:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:21:39.723 20:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.982 20:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.982 20:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.982 20:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.982 20:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.982 20:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:39.982 20:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.982 20:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:39.982 20:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:40.240 20:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:40.240 20:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.240 20:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:40.240 20:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:40.240 20:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:40.240 20:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.240 20:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.240 20:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.240 20:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.240 20:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.240 20:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.240 20:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.240 20:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.180 00:21:41.180 20:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.180 20:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.180 20:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.180 20:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.180 20:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.180 20:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.180 20:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.180 20:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.180 20:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.180 { 00:21:41.180 "cntlid": 89, 00:21:41.180 "qid": 0, 00:21:41.180 "state": "enabled", 00:21:41.180 "thread": "nvmf_tgt_poll_group_000", 00:21:41.180 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:41.180 "listen_address": { 00:21:41.180 "trtype": "TCP", 00:21:41.180 "adrfam": "IPv4", 00:21:41.180 "traddr": "10.0.0.2", 00:21:41.180 "trsvcid": "4420" 00:21:41.180 }, 00:21:41.180 "peer_address": { 00:21:41.180 "trtype": "TCP", 00:21:41.180 "adrfam": "IPv4", 00:21:41.180 "traddr": "10.0.0.1", 00:21:41.180 "trsvcid": "56158" 00:21:41.180 }, 00:21:41.180 "auth": { 00:21:41.180 "state": "completed", 00:21:41.180 "digest": "sha384", 00:21:41.180 "dhgroup": "ffdhe8192" 00:21:41.180 } 00:21:41.180 } 00:21:41.180 ]' 00:21:41.180 20:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.438 20:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:41.438 20:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.438 20:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:41.438 20:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.438 20:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.438 20:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.438 20:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.696 20:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:21:41.696 20:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:21:42.629 20:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:21:42.629 20:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.629 20:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.629 20:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.629 20:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.629 20:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.629 20:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:42.629 20:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:42.887 20:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:42.887 20:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.887 20:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:42.887 20:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:42.887 20:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:42.887 20:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.887 20:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.887 20:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.887 20:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.887 20:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.887 20:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.887 20:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.888 20:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.825 00:21:43.825 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.825 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.825 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.083 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.083 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.083 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.083 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.083 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.083 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.083 { 00:21:44.083 "cntlid": 91, 00:21:44.083 "qid": 0, 00:21:44.083 "state": "enabled", 00:21:44.083 "thread": "nvmf_tgt_poll_group_000", 00:21:44.083 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:44.083 "listen_address": { 00:21:44.083 "trtype": "TCP", 00:21:44.083 "adrfam": "IPv4", 00:21:44.083 "traddr": "10.0.0.2", 00:21:44.083 "trsvcid": "4420" 00:21:44.083 }, 00:21:44.083 "peer_address": { 00:21:44.083 "trtype": "TCP", 00:21:44.083 "adrfam": "IPv4", 00:21:44.083 "traddr": "10.0.0.1", 00:21:44.083 "trsvcid": "56184" 00:21:44.083 }, 00:21:44.083 "auth": { 00:21:44.083 "state": "completed", 00:21:44.083 "digest": "sha384", 00:21:44.083 "dhgroup": "ffdhe8192" 00:21:44.083 } 00:21:44.083 } 00:21:44.083 ]' 00:21:44.083 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.084 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:44.084 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.084 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:44.084 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.084 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:44.084 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.084 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.342 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:21:44.342 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:21:45.280 20:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.280 20:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.280 20:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.280 20:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.280 20:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.280 20:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:21:45.280 20:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:45.280 20:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:45.538 20:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:45.538 20:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.538 20:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:45.538 20:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:45.538 20:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:45.538 20:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.538 20:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.538 20:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.538 20:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.538 20:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.538 20:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.538 20:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.538 20:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.477 00:21:46.477 20:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.477 20:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.477 20:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.478 20:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.478 20:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.478 20:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.478 20:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.478 20:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.478 20:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.478 { 00:21:46.478 "cntlid": 93, 00:21:46.478 "qid": 0, 00:21:46.478 "state": "enabled", 00:21:46.478 "thread": "nvmf_tgt_poll_group_000", 00:21:46.478 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:46.478 "listen_address": { 00:21:46.478 "trtype": "TCP", 00:21:46.478 "adrfam": "IPv4", 00:21:46.478 "traddr": "10.0.0.2", 00:21:46.478 "trsvcid": "4420" 00:21:46.478 }, 00:21:46.478 "peer_address": { 00:21:46.478 "trtype": "TCP", 00:21:46.478 "adrfam": "IPv4", 00:21:46.478 "traddr": "10.0.0.1", 00:21:46.478 "trsvcid": "56216" 00:21:46.478 }, 00:21:46.478 "auth": { 00:21:46.478 "state": "completed", 00:21:46.478 "digest": "sha384", 00:21:46.478 "dhgroup": "ffdhe8192" 00:21:46.478 } 00:21:46.478 } 00:21:46.478 ]' 00:21:46.736 20:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.736 20:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:46.736 20:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.736 20:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:46.736 20:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.736 20:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.736 20:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.736 20:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.995 20:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983: 00:21:46.995 20:22:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983: 00:21:47.930 20:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.930 20:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.930 20:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.930 20:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.930 20:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.930 20:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:47.930 20:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:47.930 20:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:48.188 20:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:48.188 20:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:21:48.188 20:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:48.188 20:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:48.188 20:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:48.188 20:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.188 20:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:48.188 20:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.188 20:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.188 20:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.188 20:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:48.188 20:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:48.188 20:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:49.127 00:21:49.127 20:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:49.127 20:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.127 20:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.385 20:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.385 20:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.385 20:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.385 20:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.385 20:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.385 20:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.385 { 00:21:49.385 "cntlid": 95, 00:21:49.385 "qid": 0, 00:21:49.385 "state": "enabled", 00:21:49.385 "thread": "nvmf_tgt_poll_group_000", 00:21:49.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:49.385 "listen_address": { 00:21:49.385 "trtype": "TCP", 00:21:49.385 "adrfam": "IPv4", 00:21:49.385 "traddr": "10.0.0.2", 00:21:49.385 "trsvcid": "4420" 00:21:49.385 }, 00:21:49.385 "peer_address": { 00:21:49.385 "trtype": "TCP", 00:21:49.385 "adrfam": "IPv4", 00:21:49.385 "traddr": "10.0.0.1", 00:21:49.385 "trsvcid": "60652" 00:21:49.385 }, 00:21:49.385 "auth": { 00:21:49.385 "state": "completed", 00:21:49.385 "digest": "sha384", 00:21:49.385 "dhgroup": "ffdhe8192" 00:21:49.385 } 00:21:49.385 } 00:21:49.385 ]' 00:21:49.385 20:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.385 20:23:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:49.385 20:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.385 20:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:49.385 20:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.385 20:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.385 20:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.386 20:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.646 20:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:21:49.646 20:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:21:50.584 20:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.584 20:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:50.584 20:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.584 20:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.584 20:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.584 20:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:50.584 20:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:50.584 20:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.584 20:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:50.584 20:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:50.843 20:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:50.843 20:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.843 20:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:50.843 20:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:50.843 20:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:50.843 20:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.843 20:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.843 20:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.843 20:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.843 20:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.843 20:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.843 20:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.843 20:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.410 00:21:51.410 20:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:51.410 20:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:51.410 20:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.669 20:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.669 20:23:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.669 20:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.669 20:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.669 20:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.669 20:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:51.669 { 00:21:51.669 "cntlid": 97, 00:21:51.669 "qid": 0, 00:21:51.669 "state": "enabled", 00:21:51.669 "thread": "nvmf_tgt_poll_group_000", 00:21:51.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:51.669 "listen_address": { 00:21:51.669 "trtype": "TCP", 00:21:51.669 "adrfam": "IPv4", 00:21:51.669 "traddr": "10.0.0.2", 00:21:51.669 "trsvcid": "4420" 00:21:51.669 }, 00:21:51.669 "peer_address": { 00:21:51.669 "trtype": "TCP", 00:21:51.669 "adrfam": "IPv4", 00:21:51.669 "traddr": "10.0.0.1", 00:21:51.669 "trsvcid": "60674" 00:21:51.669 }, 00:21:51.669 "auth": { 00:21:51.669 "state": "completed", 00:21:51.669 "digest": "sha512", 00:21:51.669 "dhgroup": "null" 00:21:51.669 } 00:21:51.669 } 00:21:51.669 ]' 00:21:51.669 20:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:51.669 20:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:51.669 20:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.669 20:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:51.669 20:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.669 20:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.669 20:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.669 20:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.928 20:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:21:51.928 20:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:21:52.864 20:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.864 20:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:52.864 20:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.864 20:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.864 20:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.864 20:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.864 20:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:52.864 20:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:53.123 20:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:53.123 20:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.123 20:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:53.123 20:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:53.123 20:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:53.123 20:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.123 20:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.123 20:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.123 20:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.123 20:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.123 20:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.123 20:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.123 20:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.381 00:21:53.381 20:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:53.381 20:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:53.381 20:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.948 20:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.948 20:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.948 20:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.948 20:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.948 20:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.948 20:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:53.948 { 00:21:53.948 "cntlid": 99, 
00:21:53.948 "qid": 0, 00:21:53.948 "state": "enabled", 00:21:53.948 "thread": "nvmf_tgt_poll_group_000", 00:21:53.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:53.948 "listen_address": { 00:21:53.948 "trtype": "TCP", 00:21:53.948 "adrfam": "IPv4", 00:21:53.948 "traddr": "10.0.0.2", 00:21:53.948 "trsvcid": "4420" 00:21:53.948 }, 00:21:53.948 "peer_address": { 00:21:53.948 "trtype": "TCP", 00:21:53.948 "adrfam": "IPv4", 00:21:53.948 "traddr": "10.0.0.1", 00:21:53.948 "trsvcid": "60710" 00:21:53.948 }, 00:21:53.948 "auth": { 00:21:53.948 "state": "completed", 00:21:53.948 "digest": "sha512", 00:21:53.948 "dhgroup": "null" 00:21:53.948 } 00:21:53.948 } 00:21:53.948 ]' 00:21:53.948 20:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:53.948 20:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.948 20:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:53.948 20:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:53.948 20:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:53.948 20:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.948 20:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.948 20:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.207 20:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret 
DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:21:54.207 20:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:21:55.141 20:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.141 20:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:55.141 20:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.141 20:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.141 20:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.141 20:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.141 20:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:55.141 20:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:55.400 20:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:21:55.400 20:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.400 20:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:55.400 20:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:55.400 20:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:55.400 20:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.400 20:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.400 20:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.400 20:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.400 20:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.400 20:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.400 20:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.401 20:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.660 00:21:55.660 20:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.661 20:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.661 20:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.918 20:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.918 20:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.918 20:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.918 20:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.918 20:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.918 20:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:55.918 { 00:21:55.918 "cntlid": 101, 00:21:55.918 "qid": 0, 00:21:55.918 "state": "enabled", 00:21:55.918 "thread": "nvmf_tgt_poll_group_000", 00:21:55.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:55.918 "listen_address": { 00:21:55.918 "trtype": "TCP", 00:21:55.918 "adrfam": "IPv4", 00:21:55.918 "traddr": "10.0.0.2", 00:21:55.918 "trsvcid": "4420" 00:21:55.918 }, 00:21:55.918 "peer_address": { 00:21:55.918 "trtype": "TCP", 00:21:55.918 "adrfam": "IPv4", 00:21:55.918 "traddr": "10.0.0.1", 00:21:55.918 "trsvcid": "60732" 00:21:55.918 }, 00:21:55.918 "auth": { 00:21:55.918 "state": "completed", 00:21:55.918 "digest": "sha512", 00:21:55.918 "dhgroup": "null" 00:21:55.918 } 00:21:55.918 } 
00:21:55.918 ]' 00:21:55.918 20:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:55.918 20:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.918 20:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.176 20:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:56.176 20:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.176 20:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.176 20:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.176 20:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.435 20:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983: 00:21:56.435 20:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983: 00:21:57.369 20:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.369 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.369 20:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:57.370 20:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.370 20:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.370 20:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.370 20:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.370 20:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:57.370 20:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:57.636 20:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:57.636 20:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.636 20:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:57.636 20:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:57.636 20:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:57.636 20:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.636 20:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:57.636 20:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.636 20:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.636 20:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.636 20:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:57.636 20:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:57.636 20:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:57.894 00:21:57.894 20:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.894 20:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.894 20:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.152 20:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.152 20:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:58.152 20:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.152 20:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.152 20:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.152 20:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.152 { 00:21:58.152 "cntlid": 103, 00:21:58.152 "qid": 0, 00:21:58.152 "state": "enabled", 00:21:58.152 "thread": "nvmf_tgt_poll_group_000", 00:21:58.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:58.153 "listen_address": { 00:21:58.153 "trtype": "TCP", 00:21:58.153 "adrfam": "IPv4", 00:21:58.153 "traddr": "10.0.0.2", 00:21:58.153 "trsvcid": "4420" 00:21:58.153 }, 00:21:58.153 "peer_address": { 00:21:58.153 "trtype": "TCP", 00:21:58.153 "adrfam": "IPv4", 00:21:58.153 "traddr": "10.0.0.1", 00:21:58.153 "trsvcid": "52276" 00:21:58.153 }, 00:21:58.153 "auth": { 00:21:58.153 "state": "completed", 00:21:58.153 "digest": "sha512", 00:21:58.153 "dhgroup": "null" 00:21:58.153 } 00:21:58.153 } 00:21:58.153 ]' 00:21:58.153 20:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.153 20:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.153 20:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.153 20:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:58.153 20:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.411 20:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.411 20:23:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.411 20:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.671 20:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:21:58.671 20:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:21:59.609 20:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.609 20:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.609 20:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.609 20:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.609 20:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.609 20:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:59.609 20:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:59.609 20:23:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:59.609 20:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:59.609 20:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:59.609 20:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.609 20:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:59.609 20:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:59.609 20:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:59.609 20:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.609 20:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.609 20:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.609 20:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.867 20:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.867 20:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.867 20:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.867 20:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.125 00:22:00.125 20:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.125 20:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.125 20:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.383 20:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.383 20:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.383 20:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.383 20:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.383 20:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.383 20:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:00.383 { 00:22:00.384 "cntlid": 105, 00:22:00.384 "qid": 0, 00:22:00.384 "state": "enabled", 00:22:00.384 "thread": "nvmf_tgt_poll_group_000", 00:22:00.384 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:00.384 "listen_address": { 00:22:00.384 "trtype": "TCP", 00:22:00.384 "adrfam": "IPv4", 00:22:00.384 "traddr": "10.0.0.2", 00:22:00.384 "trsvcid": "4420" 00:22:00.384 }, 00:22:00.384 "peer_address": { 00:22:00.384 "trtype": "TCP", 00:22:00.384 "adrfam": "IPv4", 00:22:00.384 "traddr": "10.0.0.1", 00:22:00.384 "trsvcid": "52296" 00:22:00.384 }, 00:22:00.384 "auth": { 00:22:00.384 "state": "completed", 00:22:00.384 "digest": "sha512", 00:22:00.384 "dhgroup": "ffdhe2048" 00:22:00.384 } 00:22:00.384 } 00:22:00.384 ]' 00:22:00.384 20:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:00.384 20:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.384 20:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.384 20:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:00.384 20:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:00.384 20:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.384 20:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.384 20:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.642 20:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret 
DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:22:00.642 20:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:22:01.579 20:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.579 20:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.579 20:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.579 20:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.579 20:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.579 20:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:01.579 20:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:01.579 20:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:02.145 20:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:22:02.145 20:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.145 20:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:02.145 20:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:02.145 20:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:02.145 20:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.146 20:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.146 20:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.146 20:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.146 20:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.146 20:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.146 20:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.146 20:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.404 00:22:02.404 20:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:02.404 20:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:02.404 20:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.662 20:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.662 20:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.662 20:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.662 20:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.662 20:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.662 20:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:02.662 { 00:22:02.662 "cntlid": 107, 00:22:02.662 "qid": 0, 00:22:02.662 "state": "enabled", 00:22:02.662 "thread": "nvmf_tgt_poll_group_000", 00:22:02.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:02.662 "listen_address": { 00:22:02.662 "trtype": "TCP", 00:22:02.662 "adrfam": "IPv4", 00:22:02.662 "traddr": "10.0.0.2", 00:22:02.662 "trsvcid": "4420" 00:22:02.662 }, 00:22:02.662 "peer_address": { 00:22:02.662 "trtype": "TCP", 00:22:02.662 "adrfam": "IPv4", 00:22:02.662 "traddr": "10.0.0.1", 00:22:02.662 "trsvcid": "52324" 00:22:02.662 }, 00:22:02.662 "auth": { 00:22:02.662 "state": 
"completed", 00:22:02.662 "digest": "sha512", 00:22:02.662 "dhgroup": "ffdhe2048" 00:22:02.662 } 00:22:02.662 } 00:22:02.662 ]' 00:22:02.662 20:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:02.662 20:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.662 20:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.662 20:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:02.662 20:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.921 20:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.921 20:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.921 20:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.179 20:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:22:03.179 20:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:22:04.114 20:23:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.114 20:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:04.114 20:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.114 20:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.114 20:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.114 20:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:04.114 20:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:04.114 20:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:04.373 20:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:22:04.373 20:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:04.373 20:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:04.373 20:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:04.373 20:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:04.373 20:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:04.373 20:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:04.373 20:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:04.373 20:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:04.373 20:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:04.373 20:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:04.373 20:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:04.373 20:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:04.632
00:22:04.632 20:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:04.632 20:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:04.632 20:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:04.890 20:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:04.890 20:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:04.890 20:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:04.890 20:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:04.890 20:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:04.890 20:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:04.890 {
00:22:04.890 "cntlid": 109,
00:22:04.890 "qid": 0,
00:22:04.890 "state": "enabled",
00:22:04.890 "thread": "nvmf_tgt_poll_group_000",
00:22:04.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:22:04.890 "listen_address": {
00:22:04.890 "trtype": "TCP",
00:22:04.890 "adrfam": "IPv4",
00:22:04.890 "traddr": "10.0.0.2",
00:22:04.890 "trsvcid": "4420"
00:22:04.890 },
00:22:04.890 "peer_address": {
00:22:04.890 "trtype": "TCP",
00:22:04.890 "adrfam": "IPv4",
00:22:04.890 "traddr": "10.0.0.1",
00:22:04.890 "trsvcid": "52340"
00:22:04.890 },
00:22:04.890 "auth": {
00:22:04.890 "state": "completed",
00:22:04.890 "digest": "sha512",
00:22:04.890 "dhgroup": "ffdhe2048"
00:22:04.890 }
00:22:04.890 }
00:22:04.890 ]'
00:22:04.890 20:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:04.890 20:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:04.890 20:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:05.148 20:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:22:05.148 20:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:05.148 20:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:05.148 20:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:05.148 20:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:05.406 20:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983:
00:22:05.406 20:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983:
00:22:06.341 20:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:06.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:06.341 20:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:22:06.341 20:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:06.341 20:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:06.341 20:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:06.341 20:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:06.341 20:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:22:06.341 20:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:22:06.599 20:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3
00:22:06.599 20:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:06.599 20:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:06.599 20:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:22:06.599 20:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:22:06.599 20:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:06.599 20:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:22:06.599 20:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:06.599 20:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:06.599 20:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:06.599 20:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:22:06.599 20:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:06.599 20:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:06.857
00:22:06.857 20:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:06.857 20:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:06.857 20:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:07.116 20:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:07.116 20:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:07.116 20:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:07.116 20:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:07.116 20:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:07.116 20:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:07.116 {
00:22:07.116 "cntlid": 111,
00:22:07.116 "qid": 0,
00:22:07.116 "state": "enabled",
00:22:07.116 "thread": "nvmf_tgt_poll_group_000",
00:22:07.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:22:07.116 "listen_address": {
00:22:07.116 "trtype": "TCP",
00:22:07.116 "adrfam": "IPv4",
00:22:07.116 "traddr": "10.0.0.2",
00:22:07.116 "trsvcid": "4420"
00:22:07.116 },
00:22:07.116 "peer_address": {
00:22:07.116 "trtype": "TCP",
00:22:07.116 "adrfam": "IPv4",
00:22:07.116 "traddr": "10.0.0.1",
00:22:07.116 "trsvcid": "52362"
00:22:07.116 },
00:22:07.116 "auth": {
00:22:07.116 "state": "completed",
00:22:07.116 "digest": "sha512",
00:22:07.116 "dhgroup": "ffdhe2048"
00:22:07.116 }
00:22:07.116 }
00:22:07.116 ]'
00:22:07.116 20:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:07.116 20:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:07.116 20:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:07.116 20:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:22:07.116 20:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:07.116 20:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:07.116 20:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:07.116 20:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:07.383 20:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=:
00:22:07.383 20:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=:
00:22:08.321 20:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:08.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:08.321 20:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:22:08.321 20:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.321 20:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:08.321 20:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.321 20:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:22:08.321 20:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:08.321 20:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:08.321 20:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:08.579 20:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0
00:22:08.579 20:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:08.579 20:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:08.579 20:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:22:08.579 20:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:22:08.579 20:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:08.579 20:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:08.579 20:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.579 20:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:08.579 20:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.579 20:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:08.579 20:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:08.579 20:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:09.148
00:22:09.148 20:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:09.148 20:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:09.148 20:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:09.148 20:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:09.148 20:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:09.148 20:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:09.148 20:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:09.408 20:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:09.408 20:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:09.408 {
00:22:09.408 "cntlid": 113,
00:22:09.408 "qid": 0,
00:22:09.408 "state": "enabled",
00:22:09.408 "thread": "nvmf_tgt_poll_group_000",
00:22:09.408 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:22:09.408 "listen_address": {
00:22:09.408 "trtype": "TCP",
00:22:09.408 "adrfam": "IPv4",
00:22:09.408 "traddr": "10.0.0.2",
00:22:09.408 "trsvcid": "4420"
00:22:09.408 },
00:22:09.408 "peer_address": {
00:22:09.408 "trtype": "TCP",
00:22:09.408 "adrfam": "IPv4",
00:22:09.408 "traddr": "10.0.0.1",
00:22:09.408 "trsvcid": "33926"
00:22:09.408 },
00:22:09.408 "auth": {
00:22:09.408 "state": "completed",
00:22:09.408 "digest": "sha512",
00:22:09.408 "dhgroup": "ffdhe3072"
00:22:09.408 }
00:22:09.408 }
00:22:09.408 ]'
00:22:09.408 20:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:09.408 20:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:09.408 20:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:09.408 20:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:22:09.408 20:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:09.408 20:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:09.408 20:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:09.408 20:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:09.667 20:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=:
00:22:09.667 20:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=:
00:22:10.602 20:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:10.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:10.602 20:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:22:10.602 20:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:10.602 20:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:10.602 20:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:10.602 20:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:10.602 20:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:10.602 20:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:10.861 20:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1
00:22:10.861 20:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:10.861 20:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:10.861 20:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:22:10.861 20:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:22:10.861 20:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:10.861 20:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:10.861 20:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:10.861 20:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:10.861 20:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:10.861 20:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:10.861 20:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:10.861 20:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:11.121
00:22:11.383 20:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:11.383 20:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:11.383 20:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:11.643 20:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:11.643 20:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:11.643 20:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:11.643 20:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:11.643 20:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:11.643 20:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:11.643 {
00:22:11.643 "cntlid": 115,
00:22:11.643 "qid": 0,
00:22:11.643 "state": "enabled",
00:22:11.644 "thread": "nvmf_tgt_poll_group_000",
00:22:11.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:22:11.644 "listen_address": {
00:22:11.644 "trtype": "TCP",
00:22:11.644 "adrfam": "IPv4",
00:22:11.644 "traddr": "10.0.0.2",
00:22:11.644 "trsvcid": "4420"
00:22:11.644 },
00:22:11.644 "peer_address": {
00:22:11.644 "trtype": "TCP",
00:22:11.644 "adrfam": "IPv4",
00:22:11.644 "traddr": "10.0.0.1",
00:22:11.644 "trsvcid": "33958"
00:22:11.644 },
00:22:11.644 "auth": {
00:22:11.644 "state": "completed",
00:22:11.644 "digest": "sha512",
00:22:11.644 "dhgroup": "ffdhe3072"
00:22:11.644 }
00:22:11.644 }
00:22:11.644 ]'
00:22:11.644 20:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:11.644 20:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:11.644 20:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:11.644 20:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:22:11.644 20:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:11.644 20:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:11.644 20:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:11.644 20:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:11.902 20:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==:
00:22:11.902 20:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==:
00:22:12.841 20:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:12.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:12.841 20:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:22:12.841 20:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:12.841 20:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:12.841 20:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:12.841 20:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:12.841 20:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:12.841 20:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:13.099 20:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2
00:22:13.099 20:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:13.099 20:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:13.099 20:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:22:13.099 20:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:22:13.099 20:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:13.099 20:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:13.099 20:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:13.099 20:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:13.099 20:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:13.099 20:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:13.099 20:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:13.099 20:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:13.357
00:22:13.357 20:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:13.358 20:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:13.358 20:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:13.616 20:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:13.616 20:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:13.616 20:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:13.616 20:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:13.616 20:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:13.616 20:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:13.616 {
00:22:13.616 "cntlid": 117,
00:22:13.616 "qid": 0,
00:22:13.616 "state": "enabled",
00:22:13.616 "thread": "nvmf_tgt_poll_group_000",
00:22:13.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:22:13.616 "listen_address": {
00:22:13.616 "trtype": "TCP",
00:22:13.616 "adrfam": "IPv4",
00:22:13.616 "traddr": "10.0.0.2",
00:22:13.616 "trsvcid": "4420"
00:22:13.616 },
00:22:13.616 "peer_address": {
00:22:13.616 "trtype": "TCP",
00:22:13.616 "adrfam": "IPv4",
00:22:13.616 "traddr": "10.0.0.1",
00:22:13.616 "trsvcid": "33984"
00:22:13.616 },
00:22:13.616 "auth": {
00:22:13.616 "state": "completed",
00:22:13.616 "digest": "sha512",
00:22:13.616 "dhgroup": "ffdhe3072"
00:22:13.616 }
00:22:13.616 }
00:22:13.616 ]'
00:22:13.875 20:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:13.875 20:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:13.875 20:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:13.875 20:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:22:13.875 20:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:13.875 20:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:13.875 20:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:13.875 20:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:14.134 20:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983:
00:22:14.134 20:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983:
00:22:15.072 20:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:15.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:15.072 20:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:22:15.072 20:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:15.072 20:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:15.072 20:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:15.072 20:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:15.072 20:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:15.072 20:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:15.330 20:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3
00:22:15.330 20:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:15.330 20:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:15.330 20:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:22:15.330 20:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:22:15.330 20:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:15.330 20:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:22:15.330 20:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:15.330 20:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:15.330 20:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:15.330 20:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:22:15.330 20:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:15.330 20:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:15.589
00:22:15.589 20:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:15.589 20:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:15.589 20:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:16.158 20:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:16.158 20:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:16.158 20:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:16.159 20:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:16.159 20:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:16.159 20:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:16.159 {
00:22:16.159 "cntlid": 119,
00:22:16.159 "qid": 0,
00:22:16.159 "state": "enabled",
00:22:16.159 "thread": "nvmf_tgt_poll_group_000",
00:22:16.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:22:16.159 "listen_address": {
00:22:16.159 "trtype": "TCP",
00:22:16.159 "adrfam": "IPv4",
00:22:16.159 "traddr": "10.0.0.2",
00:22:16.159 "trsvcid": "4420"
00:22:16.159 },
00:22:16.159 "peer_address": {
00:22:16.159 "trtype": "TCP",
00:22:16.159 "adrfam": "IPv4",
00:22:16.159 "traddr": "10.0.0.1",
00:22:16.159 "trsvcid": "34008" 00:22:16.159 }, 00:22:16.159 "auth": { 00:22:16.159 "state": "completed", 00:22:16.159 "digest": "sha512", 00:22:16.159 "dhgroup": "ffdhe3072" 00:22:16.159 } 00:22:16.159 } 00:22:16.159 ]' 00:22:16.159 20:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:16.159 20:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:16.159 20:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:16.159 20:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:16.159 20:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:16.159 20:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.159 20:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.159 20:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.418 20:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:22:16.418 20:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:22:17.354 20:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.354 20:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:17.354 20:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.354 20:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.354 20:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.354 20:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:17.354 20:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:17.354 20:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:17.354 20:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:17.614 20:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:22:17.614 20:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:17.614 20:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:17.614 20:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:17.614 20:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:17.614 20:23:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.614 20:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.614 20:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.614 20:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.614 20:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.614 20:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.614 20:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.614 20:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.872 00:22:17.872 20:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:17.872 20:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.872 20:23:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:18.440 20:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.440 20:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.440 20:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.440 20:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.440 20:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.440 20:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:18.440 { 00:22:18.440 "cntlid": 121, 00:22:18.440 "qid": 0, 00:22:18.440 "state": "enabled", 00:22:18.440 "thread": "nvmf_tgt_poll_group_000", 00:22:18.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:18.440 "listen_address": { 00:22:18.440 "trtype": "TCP", 00:22:18.440 "adrfam": "IPv4", 00:22:18.440 "traddr": "10.0.0.2", 00:22:18.440 "trsvcid": "4420" 00:22:18.440 }, 00:22:18.440 "peer_address": { 00:22:18.440 "trtype": "TCP", 00:22:18.440 "adrfam": "IPv4", 00:22:18.440 "traddr": "10.0.0.1", 00:22:18.440 "trsvcid": "50946" 00:22:18.440 }, 00:22:18.440 "auth": { 00:22:18.440 "state": "completed", 00:22:18.440 "digest": "sha512", 00:22:18.440 "dhgroup": "ffdhe4096" 00:22:18.440 } 00:22:18.440 } 00:22:18.440 ]' 00:22:18.440 20:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:18.440 20:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:18.440 20:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:18.440 20:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:18.440 20:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:18.440 20:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.440 20:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.440 20:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.698 20:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:22:18.698 20:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:22:19.641 20:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.641 20:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:19.641 20:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.641 20:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.641 20:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.641 20:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:19.641 20:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:19.641 20:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:19.906 20:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:22:19.906 20:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:19.906 20:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:19.906 20:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:19.906 20:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:19.906 20:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.906 20:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.906 20:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.906 20:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:19.906 20:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.906 20:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.906 20:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.906 20:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.165 00:22:20.165 20:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:20.165 20:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.165 20:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:20.731 20:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.731 20:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.731 20:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.731 20:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.731 
20:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.731 20:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:20.731 { 00:22:20.731 "cntlid": 123, 00:22:20.731 "qid": 0, 00:22:20.731 "state": "enabled", 00:22:20.731 "thread": "nvmf_tgt_poll_group_000", 00:22:20.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:20.731 "listen_address": { 00:22:20.731 "trtype": "TCP", 00:22:20.731 "adrfam": "IPv4", 00:22:20.731 "traddr": "10.0.0.2", 00:22:20.731 "trsvcid": "4420" 00:22:20.731 }, 00:22:20.731 "peer_address": { 00:22:20.731 "trtype": "TCP", 00:22:20.731 "adrfam": "IPv4", 00:22:20.731 "traddr": "10.0.0.1", 00:22:20.731 "trsvcid": "50968" 00:22:20.731 }, 00:22:20.732 "auth": { 00:22:20.732 "state": "completed", 00:22:20.732 "digest": "sha512", 00:22:20.732 "dhgroup": "ffdhe4096" 00:22:20.732 } 00:22:20.732 } 00:22:20.732 ]' 00:22:20.732 20:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:20.732 20:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:20.732 20:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:20.732 20:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:20.732 20:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:20.732 20:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.732 20:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.732 20:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.989 20:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:22:20.989 20:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:22:21.928 20:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.928 20:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:21.928 20:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.928 20:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.928 20:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.928 20:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:21.928 20:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:21.928 20:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:22.186 20:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:22.186 20:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:22.186 20:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:22.186 20:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:22.186 20:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:22.186 20:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.186 20:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.186 20:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.186 20:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.186 20:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.186 20:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.186 20:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.186 20:23:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.445 00:22:22.445 20:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:22.445 20:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:22.445 20:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.704 20:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.704 20:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:22.704 20:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.704 20:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.704 20:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.704 20:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:22.704 { 00:22:22.704 "cntlid": 125, 00:22:22.704 "qid": 0, 00:22:22.704 "state": "enabled", 00:22:22.704 "thread": "nvmf_tgt_poll_group_000", 00:22:22.704 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:22.704 "listen_address": { 00:22:22.704 "trtype": "TCP", 00:22:22.704 "adrfam": "IPv4", 00:22:22.704 "traddr": "10.0.0.2", 00:22:22.704 "trsvcid": "4420" 00:22:22.704 }, 00:22:22.704 "peer_address": { 
00:22:22.704 "trtype": "TCP", 00:22:22.704 "adrfam": "IPv4", 00:22:22.704 "traddr": "10.0.0.1", 00:22:22.704 "trsvcid": "51004" 00:22:22.704 }, 00:22:22.704 "auth": { 00:22:22.704 "state": "completed", 00:22:22.704 "digest": "sha512", 00:22:22.704 "dhgroup": "ffdhe4096" 00:22:22.704 } 00:22:22.704 } 00:22:22.704 ]' 00:22:22.704 20:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:22.962 20:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:22.962 20:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:22.962 20:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:22.962 20:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:22.962 20:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.962 20:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.963 20:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.220 20:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983: 00:22:23.220 20:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983: 00:22:24.153 20:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.153 20:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:24.153 20:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.153 20:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.153 20:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.153 20:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:24.153 20:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:24.153 20:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:24.412 20:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:24.412 20:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:24.412 20:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:24.412 20:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:24.412 20:23:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:24.412 20:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.412 20:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:24.412 20:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.412 20:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.412 20:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.412 20:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:24.412 20:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:24.412 20:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:24.980 00:22:24.980 20:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:24.980 20:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:24.980 20:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.980 20:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.980 20:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.980 20:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.980 20:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.980 20:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.980 20:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:24.980 { 00:22:24.980 "cntlid": 127, 00:22:24.980 "qid": 0, 00:22:24.980 "state": "enabled", 00:22:24.980 "thread": "nvmf_tgt_poll_group_000", 00:22:24.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:24.980 "listen_address": { 00:22:24.981 "trtype": "TCP", 00:22:24.981 "adrfam": "IPv4", 00:22:24.981 "traddr": "10.0.0.2", 00:22:24.981 "trsvcid": "4420" 00:22:24.981 }, 00:22:24.981 "peer_address": { 00:22:24.981 "trtype": "TCP", 00:22:24.981 "adrfam": "IPv4", 00:22:24.981 "traddr": "10.0.0.1", 00:22:24.981 "trsvcid": "51032" 00:22:24.981 }, 00:22:24.981 "auth": { 00:22:24.981 "state": "completed", 00:22:24.981 "digest": "sha512", 00:22:24.981 "dhgroup": "ffdhe4096" 00:22:24.981 } 00:22:24.981 } 00:22:24.981 ]' 00:22:24.981 20:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:25.240 20:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:25.240 20:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:25.240 20:23:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:25.240 20:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:25.240 20:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.240 20:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.240 20:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.498 20:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:22:25.498 20:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:22:26.437 20:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.437 20:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:26.437 20:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.437 20:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:26.437 20:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.437 20:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:26.437 20:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:26.437 20:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:26.437 20:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:26.695 20:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:22:26.695 20:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:26.695 20:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:26.695 20:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:26.695 20:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:26.695 20:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.695 20:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.695 20:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.695 20:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:22:26.695 20:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.695 20:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.695 20:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.695 20:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:27.264 00:22:27.264 20:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:27.264 20:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:27.264 20:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.523 20:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.523 20:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.523 20:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.523 20:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.523 20:23:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.523 20:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:27.523 { 00:22:27.523 "cntlid": 129, 00:22:27.523 "qid": 0, 00:22:27.523 "state": "enabled", 00:22:27.523 "thread": "nvmf_tgt_poll_group_000", 00:22:27.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:27.523 "listen_address": { 00:22:27.523 "trtype": "TCP", 00:22:27.523 "adrfam": "IPv4", 00:22:27.523 "traddr": "10.0.0.2", 00:22:27.523 "trsvcid": "4420" 00:22:27.523 }, 00:22:27.523 "peer_address": { 00:22:27.523 "trtype": "TCP", 00:22:27.523 "adrfam": "IPv4", 00:22:27.523 "traddr": "10.0.0.1", 00:22:27.523 "trsvcid": "51068" 00:22:27.523 }, 00:22:27.523 "auth": { 00:22:27.523 "state": "completed", 00:22:27.523 "digest": "sha512", 00:22:27.523 "dhgroup": "ffdhe6144" 00:22:27.523 } 00:22:27.523 } 00:22:27.523 ]' 00:22:27.523 20:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:27.781 20:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:27.781 20:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:27.781 20:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:27.781 20:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:27.781 20:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.781 20:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.781 20:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
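Each cycle verifies the negotiated parameters by pulling the qpair listing and running three `jq` filters over it (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`); the backslash-escaped right-hand sides such as `\s\h\a\5\1\2` are simply how bash xtrace prints the quoted `[[ ... == ... ]]` patterns. A `grep`-based stand-in for those checks, run against a qpair record condensed from the output above (a sketch only; the test itself uses `jq -r`):

```shell
# Condensed qpair record in the shape returned by nvmf_subsystem_get_qpairs.
qpairs='[ { "cntlid": 129, "qid": 0, "state": "enabled",
  "auth": { "state": "completed", "digest": "sha512", "dhgroup": "ffdhe6144" } } ]'
# Extract the same three fields the test checks with jq.
digest=$(printf '%s' "$qpairs" | grep -o '"digest": "[^"]*"' | cut -d'"' -f4)
dhgroup=$(printf '%s' "$qpairs" | grep -o '"dhgroup": "[^"]*"' | cut -d'"' -f4)
printf '%s' "$qpairs" | grep -q '"auth": { "state": "completed"' && auth_state=completed
echo "digest=$digest dhgroup=$dhgroup auth_state=$auth_state"
```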
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.041 20:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:22:28.041 20:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:22:28.979 20:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.980 20:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.980 20:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.980 20:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.980 20:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.980 20:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:28.980 20:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:28.980 20:23:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:29.238 20:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:22:29.238 20:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:29.238 20:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:29.238 20:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:29.238 20:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:29.238 20:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:29.238 20:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.238 20:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.238 20:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.238 20:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.238 20:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.238 20:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.238 20:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.808 00:22:29.808 20:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:29.808 20:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:29.808 20:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.066 20:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.066 20:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.066 20:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.066 20:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.066 20:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.066 20:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:30.066 { 00:22:30.066 "cntlid": 131, 00:22:30.066 "qid": 0, 00:22:30.066 "state": "enabled", 00:22:30.066 "thread": "nvmf_tgt_poll_group_000", 00:22:30.066 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:30.066 "listen_address": { 00:22:30.066 "trtype": "TCP", 00:22:30.066 "adrfam": "IPv4", 00:22:30.066 "traddr": "10.0.0.2", 00:22:30.066 
"trsvcid": "4420" 00:22:30.066 }, 00:22:30.066 "peer_address": { 00:22:30.066 "trtype": "TCP", 00:22:30.066 "adrfam": "IPv4", 00:22:30.066 "traddr": "10.0.0.1", 00:22:30.066 "trsvcid": "56056" 00:22:30.066 }, 00:22:30.066 "auth": { 00:22:30.066 "state": "completed", 00:22:30.066 "digest": "sha512", 00:22:30.066 "dhgroup": "ffdhe6144" 00:22:30.066 } 00:22:30.066 } 00:22:30.066 ]' 00:22:30.066 20:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:30.390 20:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:30.390 20:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:30.390 20:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:30.390 20:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:30.390 20:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.390 20:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.390 20:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.675 20:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:22:30.675 20:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:22:31.704 20:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.704 20:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:31.704 20:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.704 20:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.704 20:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.704 20:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:31.704 20:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:31.704 20:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:32.001 20:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:32.001 20:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:32.001 20:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:32.001 20:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:32.001 20:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:32.001 20:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.001 20:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.001 20:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.001 20:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.001 20:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.001 20:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.001 20:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.001 20:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.566 00:22:32.566 20:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:32.566 20:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:22:32.566 20:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.566 20:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.566 20:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.566 20:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.566 20:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.825 20:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.825 20:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:32.825 { 00:22:32.825 "cntlid": 133, 00:22:32.825 "qid": 0, 00:22:32.825 "state": "enabled", 00:22:32.825 "thread": "nvmf_tgt_poll_group_000", 00:22:32.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:32.825 "listen_address": { 00:22:32.825 "trtype": "TCP", 00:22:32.825 "adrfam": "IPv4", 00:22:32.825 "traddr": "10.0.0.2", 00:22:32.825 "trsvcid": "4420" 00:22:32.825 }, 00:22:32.825 "peer_address": { 00:22:32.825 "trtype": "TCP", 00:22:32.825 "adrfam": "IPv4", 00:22:32.825 "traddr": "10.0.0.1", 00:22:32.825 "trsvcid": "56098" 00:22:32.825 }, 00:22:32.825 "auth": { 00:22:32.825 "state": "completed", 00:22:32.825 "digest": "sha512", 00:22:32.825 "dhgroup": "ffdhe6144" 00:22:32.825 } 00:22:32.825 } 00:22:32.825 ]' 00:22:32.825 20:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:32.825 20:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:32.825 20:23:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:32.825 20:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:32.825 20:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:32.825 20:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.825 20:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.825 20:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.083 20:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983: 00:22:33.083 20:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983: 00:22:34.022 20:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.022 20:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:34.022 20:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.022 20:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.022 20:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.022 20:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:34.022 20:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:34.022 20:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:34.280 20:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:34.280 20:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:34.280 20:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:34.280 20:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:34.280 20:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:34.280 20:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:34.280 20:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:34.280 20:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.280 20:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.280 20:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.280 20:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:34.280 20:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:34.280 20:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:34.848 00:22:35.106 20:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:35.106 20:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:35.106 20:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.364 20:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.364 20:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.364 20:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.364 20:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:35.364 20:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.364 20:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:35.364 { 00:22:35.364 "cntlid": 135, 00:22:35.364 "qid": 0, 00:22:35.364 "state": "enabled", 00:22:35.364 "thread": "nvmf_tgt_poll_group_000", 00:22:35.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:35.364 "listen_address": { 00:22:35.364 "trtype": "TCP", 00:22:35.364 "adrfam": "IPv4", 00:22:35.364 "traddr": "10.0.0.2", 00:22:35.364 "trsvcid": "4420" 00:22:35.364 }, 00:22:35.364 "peer_address": { 00:22:35.364 "trtype": "TCP", 00:22:35.364 "adrfam": "IPv4", 00:22:35.364 "traddr": "10.0.0.1", 00:22:35.364 "trsvcid": "56118" 00:22:35.364 }, 00:22:35.364 "auth": { 00:22:35.364 "state": "completed", 00:22:35.364 "digest": "sha512", 00:22:35.364 "dhgroup": "ffdhe6144" 00:22:35.364 } 00:22:35.364 } 00:22:35.364 ]' 00:22:35.365 20:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:35.365 20:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:35.365 20:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:35.365 20:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:35.365 20:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:35.365 20:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.365 20:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.365 20:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.623 20:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:22:35.623 20:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:22:36.559 20:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.559 20:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:36.559 20:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.559 20:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.559 20:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.559 20:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:36.559 20:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:36.559 20:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:36.559 20:23:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:36.817 20:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:36.817 20:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:36.817 20:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:36.817 20:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:36.817 20:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:36.817 20:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.817 20:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.817 20:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.817 20:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.817 20:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.817 20:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.817 20:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.817 20:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:37.769 00:22:37.769 20:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:37.769 20:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.769 20:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:38.028 20:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.028 20:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.028 20:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.028 20:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.028 20:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.028 20:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:38.028 { 00:22:38.028 "cntlid": 137, 00:22:38.028 "qid": 0, 00:22:38.028 "state": "enabled", 00:22:38.028 "thread": "nvmf_tgt_poll_group_000", 00:22:38.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:38.028 "listen_address": { 00:22:38.028 "trtype": "TCP", 00:22:38.028 "adrfam": "IPv4", 00:22:38.028 "traddr": "10.0.0.2", 00:22:38.028 
"trsvcid": "4420" 00:22:38.028 }, 00:22:38.028 "peer_address": { 00:22:38.028 "trtype": "TCP", 00:22:38.028 "adrfam": "IPv4", 00:22:38.028 "traddr": "10.0.0.1", 00:22:38.028 "trsvcid": "56142" 00:22:38.028 }, 00:22:38.028 "auth": { 00:22:38.028 "state": "completed", 00:22:38.028 "digest": "sha512", 00:22:38.028 "dhgroup": "ffdhe8192" 00:22:38.028 } 00:22:38.028 } 00:22:38.028 ]' 00:22:38.028 20:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:38.028 20:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:38.028 20:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:38.028 20:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:38.028 20:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:38.028 20:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.028 20:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.028 20:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:38.287 20:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:22:38.287 20:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:22:39.223 20:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.223 20:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:39.223 20:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.223 20:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.223 20:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.223 20:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:39.223 20:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:39.223 20:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:39.481 20:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:39.481 20:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:39.481 20:23:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:39.481 20:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:39.481 20:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:39.481 20:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:39.481 20:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.481 20:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.481 20:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.481 20:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.481 20:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.481 20:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.481 20:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:40.420 00:22:40.420 20:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:40.420 20:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:40.420 20:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.679 20:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.679 20:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.679 20:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.679 20:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.679 20:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.679 20:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:40.679 { 00:22:40.679 "cntlid": 139, 00:22:40.679 "qid": 0, 00:22:40.679 "state": "enabled", 00:22:40.679 "thread": "nvmf_tgt_poll_group_000", 00:22:40.679 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:40.679 "listen_address": { 00:22:40.679 "trtype": "TCP", 00:22:40.679 "adrfam": "IPv4", 00:22:40.679 "traddr": "10.0.0.2", 00:22:40.679 "trsvcid": "4420" 00:22:40.679 }, 00:22:40.679 "peer_address": { 00:22:40.679 "trtype": "TCP", 00:22:40.679 "adrfam": "IPv4", 00:22:40.679 "traddr": "10.0.0.1", 00:22:40.679 "trsvcid": "55436" 00:22:40.679 }, 00:22:40.679 "auth": { 00:22:40.679 "state": "completed", 00:22:40.679 "digest": "sha512", 00:22:40.679 "dhgroup": "ffdhe8192" 00:22:40.679 } 00:22:40.679 } 00:22:40.679 ]' 00:22:40.679 20:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:40.679 20:23:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:40.679 20:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:40.679 20:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:40.679 20:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:40.679 20:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.679 20:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.679 20:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.938 20:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:22:40.938 20:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: --dhchap-ctrl-secret DHHC-1:02:ZjkyYmVhYWY1NTNmMjcyMWEyMjI4OGY2NDI0MjI2YWZkZWJlOTdlMTIzYzMxNTJiig7C0w==: 00:22:41.890 20:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.890 20:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:41.890 20:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.890 20:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.890 20:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.890 20:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:41.890 20:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:41.890 20:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:42.148 20:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:42.148 20:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:42.148 20:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:42.148 20:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:42.148 20:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:42.148 20:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:42.148 20:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:22:42.148 20:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.148 20:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.148 20:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.149 20:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:42.149 20:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:42.149 20:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:43.086 00:22:43.086 20:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:43.086 20:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:43.086 20:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.344 20:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.344 20:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.344 20:23:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.344 20:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.344 20:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.344 20:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:43.344 { 00:22:43.344 "cntlid": 141, 00:22:43.344 "qid": 0, 00:22:43.344 "state": "enabled", 00:22:43.344 "thread": "nvmf_tgt_poll_group_000", 00:22:43.344 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:43.344 "listen_address": { 00:22:43.344 "trtype": "TCP", 00:22:43.344 "adrfam": "IPv4", 00:22:43.344 "traddr": "10.0.0.2", 00:22:43.344 "trsvcid": "4420" 00:22:43.344 }, 00:22:43.344 "peer_address": { 00:22:43.344 "trtype": "TCP", 00:22:43.344 "adrfam": "IPv4", 00:22:43.344 "traddr": "10.0.0.1", 00:22:43.344 "trsvcid": "55454" 00:22:43.344 }, 00:22:43.344 "auth": { 00:22:43.344 "state": "completed", 00:22:43.344 "digest": "sha512", 00:22:43.344 "dhgroup": "ffdhe8192" 00:22:43.344 } 00:22:43.344 } 00:22:43.344 ]' 00:22:43.344 20:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:43.344 20:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:43.344 20:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:43.344 20:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:43.344 20:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:43.604 20:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.604 20:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.604 20:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.863 20:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983: 00:22:43.863 20:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:01:ZjM3YjUyZGE4NGFiMmVkNTU5MjYzOTAzNjcxMjJiN2ZPa983: 00:22:44.797 20:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.798 20:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:44.798 20:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.798 20:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.798 20:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.798 20:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:44.798 20:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:44.798 20:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:45.056 20:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:45.056 20:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:45.056 20:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:45.056 20:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:45.056 20:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:45.056 20:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:45.056 20:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:45.056 20:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.056 20:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.056 20:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.056 20:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:45.056 20:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:45.056 20:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:45.988 00:22:45.988 20:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:45.989 20:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.989 20:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:46.247 20:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.247 20:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:46.247 20:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.247 20:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.247 20:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.247 20:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:46.247 { 00:22:46.247 "cntlid": 143, 00:22:46.247 "qid": 0, 00:22:46.247 "state": "enabled", 00:22:46.247 "thread": "nvmf_tgt_poll_group_000", 00:22:46.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:46.247 "listen_address": { 00:22:46.247 "trtype": "TCP", 00:22:46.247 "adrfam": 
"IPv4", 00:22:46.247 "traddr": "10.0.0.2", 00:22:46.247 "trsvcid": "4420" 00:22:46.247 }, 00:22:46.247 "peer_address": { 00:22:46.247 "trtype": "TCP", 00:22:46.247 "adrfam": "IPv4", 00:22:46.247 "traddr": "10.0.0.1", 00:22:46.247 "trsvcid": "55474" 00:22:46.247 }, 00:22:46.247 "auth": { 00:22:46.247 "state": "completed", 00:22:46.247 "digest": "sha512", 00:22:46.247 "dhgroup": "ffdhe8192" 00:22:46.247 } 00:22:46.247 } 00:22:46.247 ]' 00:22:46.247 20:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:46.247 20:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:46.247 20:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:46.505 20:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:46.505 20:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:46.505 20:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:46.505 20:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.505 20:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.763 20:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:22:46.763 20:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:22:47.714 20:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:47.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:47.714 20:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:47.714 20:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.714 20:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.714 20:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.714 20:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:47.714 20:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:47.714 20:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:47.714 20:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:47.714 20:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:47.714 20:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:47.972 20:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:47.972 20:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:47.972 20:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:47.972 20:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:47.972 20:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:47.972 20:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:47.972 20:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.972 20:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.972 20:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.972 20:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.972 20:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.972 20:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.972 20:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.906 00:22:48.906 20:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:48.906 20:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:48.906 20:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.165 20:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.165 20:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.165 20:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.165 20:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.165 20:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.165 20:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:49.165 { 00:22:49.165 "cntlid": 145, 00:22:49.165 "qid": 0, 00:22:49.165 "state": "enabled", 00:22:49.165 "thread": "nvmf_tgt_poll_group_000", 00:22:49.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:49.165 "listen_address": { 00:22:49.165 "trtype": "TCP", 00:22:49.165 "adrfam": "IPv4", 00:22:49.165 "traddr": "10.0.0.2", 00:22:49.165 "trsvcid": "4420" 00:22:49.165 }, 00:22:49.165 "peer_address": { 00:22:49.165 "trtype": "TCP", 00:22:49.165 "adrfam": "IPv4", 00:22:49.165 "traddr": "10.0.0.1", 00:22:49.165 "trsvcid": "43050" 00:22:49.165 }, 00:22:49.165 "auth": { 00:22:49.165 "state": 
"completed", 00:22:49.165 "digest": "sha512", 00:22:49.165 "dhgroup": "ffdhe8192" 00:22:49.165 } 00:22:49.165 } 00:22:49.165 ]' 00:22:49.165 20:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:49.165 20:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:49.165 20:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:49.423 20:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:49.423 20:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:49.423 20:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:49.423 20:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:49.423 20:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:49.682 20:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:22:49.682 20:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:Zjc5YTIzNjcwNjk0NDZiMjE1ZDFiOWExOTQxODdlZTYwY2ExZTM1YWIzOGI5ZmM3I7YPdw==: --dhchap-ctrl-secret 
DHHC-1:03:MTNkZDdkOWQwZmRkYmVkZTQ5MjE3MWE3ZmFjMzQ4YWYyYWJkNjc4OWMwYTY3YmEwNjdlOWRmMTIyYWE2MDIyMg8mpEw=: 00:22:50.617 20:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:50.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:50.617 20:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:50.617 20:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.617 20:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.617 20:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.617 20:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:50.617 20:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.617 20:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.617 20:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.617 20:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:50.617 20:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:50.617 20:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:50.617 20:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:22:50.618 20:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:50.618 20:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:50.618 20:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:50.618 20:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:50.618 20:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:50.618 20:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:51.553 request: 00:22:51.553 { 00:22:51.554 "name": "nvme0", 00:22:51.554 "trtype": "tcp", 00:22:51.554 "traddr": "10.0.0.2", 00:22:51.554 "adrfam": "ipv4", 00:22:51.554 "trsvcid": "4420", 00:22:51.554 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:51.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:51.554 "prchk_reftag": false, 00:22:51.554 "prchk_guard": false, 00:22:51.554 "hdgst": false, 00:22:51.554 "ddgst": false, 00:22:51.554 "dhchap_key": "key2", 00:22:51.554 "allow_unrecognized_csi": false, 00:22:51.554 "method": "bdev_nvme_attach_controller", 00:22:51.554 "req_id": 1 00:22:51.554 } 00:22:51.554 Got JSON-RPC error response 00:22:51.554 response: 00:22:51.554 { 00:22:51.554 "code": -5, 00:22:51.554 "message": 
"Input/output error" 00:22:51.554 } 00:22:51.554 20:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:51.554 20:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:51.554 20:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:51.554 20:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:51.554 20:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:51.554 20:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.554 20:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.554 20:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.554 20:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:51.554 20:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.554 20:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.554 20:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.554 20:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:51.554 20:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:51.554 20:24:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:51.554 20:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:51.554 20:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:51.554 20:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:51.554 20:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:51.554 20:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:51.554 20:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:51.554 20:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:52.121 request: 00:22:52.121 { 00:22:52.121 "name": "nvme0", 00:22:52.121 "trtype": "tcp", 00:22:52.121 "traddr": "10.0.0.2", 00:22:52.121 "adrfam": "ipv4", 00:22:52.121 "trsvcid": "4420", 00:22:52.121 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:52.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:52.121 "prchk_reftag": false, 00:22:52.121 "prchk_guard": false, 00:22:52.121 "hdgst": 
false, 00:22:52.121 "ddgst": false, 00:22:52.121 "dhchap_key": "key1", 00:22:52.121 "dhchap_ctrlr_key": "ckey2", 00:22:52.121 "allow_unrecognized_csi": false, 00:22:52.121 "method": "bdev_nvme_attach_controller", 00:22:52.121 "req_id": 1 00:22:52.121 } 00:22:52.121 Got JSON-RPC error response 00:22:52.121 response: 00:22:52.121 { 00:22:52.121 "code": -5, 00:22:52.121 "message": "Input/output error" 00:22:52.121 } 00:22:52.121 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:52.121 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:52.121 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:52.121 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:52.121 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:52.121 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.121 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.122 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.122 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:52.122 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.122 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.122 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.122 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.122 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:52.122 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.122 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:52.122 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:52.122 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:52.122 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:52.122 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.122 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.122 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:53.056 request: 00:22:53.056 { 00:22:53.056 "name": "nvme0", 00:22:53.056 "trtype": 
"tcp", 00:22:53.056 "traddr": "10.0.0.2", 00:22:53.056 "adrfam": "ipv4", 00:22:53.056 "trsvcid": "4420", 00:22:53.056 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:53.056 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:53.056 "prchk_reftag": false, 00:22:53.056 "prchk_guard": false, 00:22:53.056 "hdgst": false, 00:22:53.056 "ddgst": false, 00:22:53.056 "dhchap_key": "key1", 00:22:53.056 "dhchap_ctrlr_key": "ckey1", 00:22:53.056 "allow_unrecognized_csi": false, 00:22:53.056 "method": "bdev_nvme_attach_controller", 00:22:53.056 "req_id": 1 00:22:53.056 } 00:22:53.056 Got JSON-RPC error response 00:22:53.056 response: 00:22:53.056 { 00:22:53.056 "code": -5, 00:22:53.056 "message": "Input/output error" 00:22:53.056 } 00:22:53.056 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:53.056 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:53.056 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:53.056 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:53.056 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:53.056 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.056 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.056 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.056 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 241196 00:22:53.056 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 241196 ']' 00:22:53.056 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 241196 00:22:53.056 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:53.056 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:53.056 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 241196 00:22:53.056 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:53.056 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:53.056 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 241196' 00:22:53.056 killing process with pid 241196 00:22:53.056 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 241196 00:22:53.056 20:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 241196 00:22:53.315 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:53.315 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:53.315 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:53.315 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.315 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=264263 00:22:53.315 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:53.315 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 264263 00:22:53.315 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 264263 ']' 00:22:53.315 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.315 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:53.315 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.315 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:53.315 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.574 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:53.574 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:53.574 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:53.574 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:53.574 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.574 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.574 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:53.574 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 264263 00:22:53.574 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 264263 ']' 00:22:53.574 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.574 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:53.574 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:53.574 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:53.574 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.833 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:53.833 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:53.833 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:53.833 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.833 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.833 null0 00:22:54.091 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.091 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:54.091 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.cjR 00:22:54.091 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.091 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.RQc ]] 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RQc 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.417 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Pdb ]] 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Pdb 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.wSj 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.t6y ]] 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.t6y 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.iCE 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:54.092 20:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:55.470 nvme0n1 00:22:55.470 20:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:55.470 20:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:55.470 20:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.729 20:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.729 20:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:55.729 20:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.729 20:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.729 20:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.729 20:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:55.729 { 00:22:55.729 "cntlid": 1, 00:22:55.729 "qid": 0, 00:22:55.729 "state": "enabled", 00:22:55.729 "thread": "nvmf_tgt_poll_group_000", 00:22:55.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:55.729 "listen_address": { 00:22:55.729 "trtype": "TCP", 00:22:55.729 "adrfam": "IPv4", 00:22:55.729 "traddr": "10.0.0.2", 00:22:55.729 "trsvcid": "4420" 00:22:55.729 }, 00:22:55.729 "peer_address": { 00:22:55.729 "trtype": "TCP", 00:22:55.729 "adrfam": "IPv4", 00:22:55.729 "traddr": 
"10.0.0.1", 00:22:55.729 "trsvcid": "43122" 00:22:55.729 }, 00:22:55.729 "auth": { 00:22:55.729 "state": "completed", 00:22:55.729 "digest": "sha512", 00:22:55.729 "dhgroup": "ffdhe8192" 00:22:55.729 } 00:22:55.729 } 00:22:55.729 ]' 00:22:55.729 20:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:55.729 20:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:55.729 20:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:55.729 20:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:55.729 20:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:55.729 20:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:55.729 20:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.729 20:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.299 20:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:22:56.299 20:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:22:57.236 20:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:57.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:57.236 20:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:57.236 20:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.236 20:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.236 20:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.236 20:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:57.237 20:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.237 20:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.237 20:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.237 20:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:57.237 20:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:57.237 20:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:57.237 20:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:57.237 20:24:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:57.237 20:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:57.237 20:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:57.237 20:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:57.237 20:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:57.237 20:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:57.237 20:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:57.237 20:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:57.495 request: 00:22:57.495 { 00:22:57.495 "name": "nvme0", 00:22:57.495 "trtype": "tcp", 00:22:57.495 "traddr": "10.0.0.2", 00:22:57.495 "adrfam": "ipv4", 00:22:57.495 "trsvcid": "4420", 00:22:57.495 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:57.495 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:57.495 "prchk_reftag": false, 00:22:57.495 "prchk_guard": false, 00:22:57.495 "hdgst": false, 00:22:57.495 "ddgst": false, 00:22:57.495 "dhchap_key": "key3", 00:22:57.495 
"allow_unrecognized_csi": false, 00:22:57.495 "method": "bdev_nvme_attach_controller", 00:22:57.495 "req_id": 1 00:22:57.495 } 00:22:57.495 Got JSON-RPC error response 00:22:57.495 response: 00:22:57.495 { 00:22:57.495 "code": -5, 00:22:57.495 "message": "Input/output error" 00:22:57.495 } 00:22:57.754 20:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:57.754 20:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:57.754 20:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:57.754 20:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:57.754 20:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:57.754 20:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:57.754 20:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:57.754 20:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:58.013 20:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:58.013 20:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:58.013 20:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:58.013 20:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:58.013 20:24:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:58.013 20:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:58.013 20:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:58.013 20:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:58.013 20:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:58.013 20:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:58.272 request: 00:22:58.272 { 00:22:58.272 "name": "nvme0", 00:22:58.272 "trtype": "tcp", 00:22:58.272 "traddr": "10.0.0.2", 00:22:58.272 "adrfam": "ipv4", 00:22:58.272 "trsvcid": "4420", 00:22:58.272 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:58.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:58.272 "prchk_reftag": false, 00:22:58.272 "prchk_guard": false, 00:22:58.272 "hdgst": false, 00:22:58.272 "ddgst": false, 00:22:58.272 "dhchap_key": "key3", 00:22:58.272 "allow_unrecognized_csi": false, 00:22:58.272 "method": "bdev_nvme_attach_controller", 00:22:58.272 "req_id": 1 00:22:58.272 } 00:22:58.272 Got JSON-RPC error response 00:22:58.272 response: 00:22:58.272 { 00:22:58.272 "code": -5, 00:22:58.272 "message": "Input/output error" 00:22:58.272 } 00:22:58.272 
20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:58.272 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:58.272 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:58.272 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:58.272 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:58.272 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:58.272 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:58.272 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:58.272 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:58.272 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:58.531 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:58.531 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.531 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.531 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.531 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:58.532 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.532 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.532 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.532 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:58.532 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:58.532 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:58.532 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:58.532 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:58.532 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:58.532 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:58.532 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:58.532 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:58.532 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:59.101 request: 00:22:59.101 { 00:22:59.101 "name": "nvme0", 00:22:59.101 "trtype": "tcp", 00:22:59.101 "traddr": "10.0.0.2", 00:22:59.101 "adrfam": "ipv4", 00:22:59.101 "trsvcid": "4420", 00:22:59.101 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:59.101 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:59.101 "prchk_reftag": false, 00:22:59.101 "prchk_guard": false, 00:22:59.102 "hdgst": false, 00:22:59.102 "ddgst": false, 00:22:59.102 "dhchap_key": "key0", 00:22:59.102 "dhchap_ctrlr_key": "key1", 00:22:59.102 "allow_unrecognized_csi": false, 00:22:59.102 "method": "bdev_nvme_attach_controller", 00:22:59.102 "req_id": 1 00:22:59.102 } 00:22:59.102 Got JSON-RPC error response 00:22:59.102 response: 00:22:59.102 { 00:22:59.102 "code": -5, 00:22:59.102 "message": "Input/output error" 00:22:59.102 } 00:22:59.102 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:59.102 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:59.102 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:59.102 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:59.102 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:59.102 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:59.102 20:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:59.360 nvme0n1 00:22:59.360 20:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:59.360 20:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:59.360 20:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.618 20:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.618 20:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:59.618 20:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:59.877 20:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:59.877 20:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.877 20:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:59.877 20:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.877 20:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:59.877 20:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:59.877 20:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:01.257 nvme0n1 00:23:01.257 20:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:23:01.257 20:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:23:01.257 20:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.516 20:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.516 20:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:01.516 20:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.516 20:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.516 
20:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.516 20:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:23:01.516 20:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:23:01.516 20:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.775 20:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.775 20:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:23:01.775 20:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: --dhchap-ctrl-secret DHHC-1:03:OWVmNzI0M2U4YTY0ODQxYmIxODhiNWUzZGFkZjJkYjQ5MGE2ZjE3ZjdkMjBjNDMxYTk2YjNlNmQwZjdlYzZjYR5AmG8=: 00:23:02.714 20:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:23:02.714 20:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:23:02.714 20:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:23:02.714 20:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:23:02.714 20:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:23:02.714 20:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:23:02.714 20:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:23:02.714 20:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:02.714 20:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:02.972 20:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:23:02.972 20:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:02.972 20:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:23:02.972 20:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:02.972 20:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:02.972 20:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:02.972 20:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:02.972 20:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:02.972 20:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:02.972 20:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:03.911 request: 00:23:03.911 { 00:23:03.911 "name": "nvme0", 00:23:03.911 "trtype": "tcp", 00:23:03.911 "traddr": "10.0.0.2", 00:23:03.911 "adrfam": "ipv4", 00:23:03.911 "trsvcid": "4420", 00:23:03.911 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:03.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:03.911 "prchk_reftag": false, 00:23:03.911 "prchk_guard": false, 00:23:03.911 "hdgst": false, 00:23:03.911 "ddgst": false, 00:23:03.911 "dhchap_key": "key1", 00:23:03.911 "allow_unrecognized_csi": false, 00:23:03.911 "method": "bdev_nvme_attach_controller", 00:23:03.911 "req_id": 1 00:23:03.911 } 00:23:03.911 Got JSON-RPC error response 00:23:03.911 response: 00:23:03.911 { 00:23:03.911 "code": -5, 00:23:03.911 "message": "Input/output error" 00:23:03.911 } 00:23:03.911 20:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:03.911 20:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:03.911 20:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:03.911 20:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:03.911 20:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:03.911 20:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:03.911 20:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:05.288 nvme0n1 00:23:05.288 20:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:23:05.288 20:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:23:05.288 20:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:05.546 20:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.546 20:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:05.546 20:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:05.805 20:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:05.805 20:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.805 20:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:05.805 20:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.805 20:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:23:05.805 20:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:05.805 20:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:06.063 nvme0n1 00:23:06.322 20:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:23:06.322 20:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:06.322 20:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:23:06.580 20:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.580 20:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:06.580 20:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:06.840 20:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:06.840 20:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.840 20:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.840 20:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.840 20:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: '' 2s 00:23:06.840 20:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:06.840 20:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:06.840 20:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: 00:23:06.840 20:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:23:06.840 20:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:06.840 20:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:06.840 20:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: ]] 00:23:06.840 20:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NDAyNGU5Zjg0ZGVjMzkyNWU5NDhiZGRhZGM5ZDRjYTBjokZ+: 00:23:06.840 20:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:23:06.840 20:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:06.840 20:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:08.746 
20:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:23:08.746 20:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:23:08.746 20:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:08.746 20:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:08.746 20:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:08.746 20:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:08.746 20:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:23:08.746 20:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:23:08.746 20:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.746 20:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.746 20:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.746 20:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: 2s 00:23:08.746 20:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:08.746 20:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:08.746 20:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:23:08.746 20:24:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: 00:23:08.746 20:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:08.746 20:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:08.746 20:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:23:08.746 20:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: ]] 00:23:08.746 20:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:OTUwYjcwZTBlMjAyZjBkMGYyNjc1N2RhMTEwNGM1MWNhYjQwNmVjZDY3NDBhMWEzEXejOw==: 00:23:08.746 20:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:08.746 20:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:11.288 20:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:23:11.288 20:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:23:11.288 20:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:11.288 20:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:11.288 20:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:11.288 20:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:11.288 20:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:23:11.288 20:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:11.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:11.288 20:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:11.288 20:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.288 20:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.288 20:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.288 20:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:11.288 20:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:11.288 20:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:12.227 nvme0n1 00:23:12.227 20:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:23:12.227 20:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.227 20:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.227 20:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.227 20:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:12.227 20:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:13.166 20:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:23:13.166 20:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:23:13.166 20:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.424 20:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.424 20:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:13.424 20:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.424 20:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.424 20:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.424 20:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:23:13.424 20:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:23:13.683 20:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:23:13.683 20:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:23:13.683 20:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.941 20:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.941 20:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:13.941 20:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.941 20:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.941 20:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.941 20:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:13.941 20:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:13.941 20:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:13.941 20:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:23:13.941 20:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:13.941 20:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:23:13.941 20:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:13.941 20:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:13.941 20:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:14.883 request: 00:23:14.883 { 00:23:14.883 "name": "nvme0", 00:23:14.883 "dhchap_key": "key1", 00:23:14.883 "dhchap_ctrlr_key": "key3", 00:23:14.883 "method": "bdev_nvme_set_keys", 00:23:14.883 "req_id": 1 00:23:14.883 } 00:23:14.883 Got JSON-RPC error response 00:23:14.883 response: 00:23:14.883 { 00:23:14.883 "code": -13, 00:23:14.883 "message": "Permission denied" 00:23:14.883 } 00:23:14.883 20:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:14.883 20:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:14.883 20:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:14.883 20:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:14.883 20:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:14.883 20:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:14.883 20:24:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:15.143 20:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:23:15.143 20:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:23:16.079 20:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:16.079 20:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:16.079 20:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:16.338 20:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:23:16.338 20:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:16.338 20:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.338 20:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.338 20:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.338 20:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:16.338 20:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:16.338 20:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:17.717 nvme0n1 00:23:17.717 20:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:17.717 20:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.717 20:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.717 20:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.717 20:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:17.717 20:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:17.717 20:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:17.717 20:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:23:17.717 20:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:17.717 20:24:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:23:17.717 20:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:17.717 20:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:17.717 20:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:18.651 request: 00:23:18.651 { 00:23:18.651 "name": "nvme0", 00:23:18.651 "dhchap_key": "key2", 00:23:18.651 "dhchap_ctrlr_key": "key0", 00:23:18.651 "method": "bdev_nvme_set_keys", 00:23:18.651 "req_id": 1 00:23:18.652 } 00:23:18.652 Got JSON-RPC error response 00:23:18.652 response: 00:23:18.652 { 00:23:18.652 "code": -13, 00:23:18.652 "message": "Permission denied" 00:23:18.652 } 00:23:18.652 20:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:18.652 20:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:18.652 20:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:18.652 20:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:18.652 20:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:18.652 20:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:18.652 20:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:18.910 20:24:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:23:18.910 20:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:23:19.848 20:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:19.848 20:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:19.848 20:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:20.109 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:23:20.109 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:23:20.109 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:23:20.109 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 241223 00:23:20.109 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 241223 ']' 00:23:20.109 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 241223 00:23:20.109 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:20.109 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.109 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 241223 00:23:20.109 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:20.109 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:20.109 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 241223' 00:23:20.109 killing process with pid 241223 00:23:20.109 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 241223 00:23:20.109 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 241223 00:23:20.679 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:20.679 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:20.679 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:23:20.679 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:20.679 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:23:20.679 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:20.679 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:20.679 rmmod nvme_tcp 00:23:20.679 rmmod nvme_fabrics 00:23:20.679 rmmod nvme_keyring 00:23:20.679 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:20.679 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:23:20.680 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:23:20.680 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 264263 ']' 00:23:20.680 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 264263 00:23:20.680 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 264263 ']' 00:23:20.680 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 264263 00:23:20.680 20:24:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:20.680 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.680 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 264263 00:23:20.680 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:20.680 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:20.680 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 264263' 00:23:20.680 killing process with pid 264263 00:23:20.680 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 264263 00:23:20.680 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 264263 00:23:20.939 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:20.939 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:20.939 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:20.939 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:23:20.939 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:23:20.939 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:20.939 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:23:20.939 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:20.939 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:20.939 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.939 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.939 20:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.843 20:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:22.843 20:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.cjR /tmp/spdk.key-sha256.417 /tmp/spdk.key-sha384.wSj /tmp/spdk.key-sha512.iCE /tmp/spdk.key-sha512.RQc /tmp/spdk.key-sha384.Pdb /tmp/spdk.key-sha256.t6y '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:22.843 00:23:22.843 real 3m33.325s 00:23:22.843 user 8m18.589s 00:23:22.843 sys 0m27.979s 00:23:22.843 20:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:22.843 20:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.843 ************************************ 00:23:22.843 END TEST nvmf_auth_target 00:23:22.843 ************************************ 00:23:22.843 20:24:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:23:22.843 20:24:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:22.843 20:24:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:22.843 20:24:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:22.843 20:24:34 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:23.102 ************************************ 00:23:23.102 START TEST nvmf_bdevio_no_huge 00:23:23.102 ************************************ 00:23:23.102 20:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:23.102 * Looking for test storage... 00:23:23.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:23.102 20:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:23.102 20:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:23:23.102 20:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:23.102 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:23.102 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:23.102 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:23.102 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:23.102 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:23:23.102 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:23:23.102 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:23:23.102 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:23:23.102 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:23:23.102 20:24:35 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:23:23.102 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:23:23.102 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:23.102 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:23:23.102 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:23:23.102 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:23.102 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:23.102 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:23:23.102 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:23:23.102 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:23.102 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:23:23.102 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:23:23.102 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:23:23.102 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:23:23.102 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:23.102 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:23:23.102 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:23:23.102 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:23:23.102 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:23.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.103 --rc genhtml_branch_coverage=1 00:23:23.103 --rc genhtml_function_coverage=1 00:23:23.103 --rc genhtml_legend=1 00:23:23.103 --rc geninfo_all_blocks=1 00:23:23.103 --rc geninfo_unexecuted_blocks=1 00:23:23.103 00:23:23.103 ' 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:23.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.103 --rc genhtml_branch_coverage=1 00:23:23.103 --rc genhtml_function_coverage=1 00:23:23.103 --rc genhtml_legend=1 00:23:23.103 --rc geninfo_all_blocks=1 00:23:23.103 --rc geninfo_unexecuted_blocks=1 00:23:23.103 00:23:23.103 ' 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:23.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.103 --rc genhtml_branch_coverage=1 00:23:23.103 --rc genhtml_function_coverage=1 00:23:23.103 --rc genhtml_legend=1 00:23:23.103 --rc geninfo_all_blocks=1 00:23:23.103 --rc geninfo_unexecuted_blocks=1 00:23:23.103 00:23:23.103 ' 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:23.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.103 --rc genhtml_branch_coverage=1 00:23:23.103 --rc 
genhtml_function_coverage=1 00:23:23.103 --rc genhtml_legend=1 00:23:23.103 --rc geninfo_all_blocks=1 00:23:23.103 --rc geninfo_unexecuted_blocks=1 00:23:23.103 00:23:23.103 ' 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:23.103 20:24:35 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:23.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:23:23.103 20:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 
0x159b)' 00:23:25.636 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:25.636 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:25.636 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.636 
20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:25.636 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:25.636 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:23:25.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:23:25.637 00:23:25.637 --- 10.0.0.2 ping statistics --- 00:23:25.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.637 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:25.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:23:25.637 00:23:25.637 --- 10.0.0.1 ping statistics --- 00:23:25.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.637 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=270023 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 270023 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 270023 ']' 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:25.637 [2024-11-18 20:24:37.331375] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:23:25.637 [2024-11-18 20:24:37.331459] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:25.637 [2024-11-18 20:24:37.403157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:25.637 [2024-11-18 20:24:37.446794] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.637 [2024-11-18 20:24:37.446857] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.637 [2024-11-18 20:24:37.446887] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.637 [2024-11-18 20:24:37.446898] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.637 [2024-11-18 20:24:37.446907] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:25.637 [2024-11-18 20:24:37.447877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:25.637 [2024-11-18 20:24:37.447939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:23:25.637 [2024-11-18 20:24:37.448006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:23:25.637 [2024-11-18 20:24:37.448010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:25.637 [2024-11-18 20:24:37.591315] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:25.637 20:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:25.637 Malloc0 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.637 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:25.638 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.638 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:25.638 [2024-11-18 20:24:37.629107] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.638 20:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.638 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:25.638 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:25.638 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:23:25.638 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:23:25.638 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:25.638 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:25.638 { 00:23:25.638 "params": { 00:23:25.638 "name": "Nvme$subsystem", 00:23:25.638 "trtype": "$TEST_TRANSPORT", 00:23:25.638 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.638 "adrfam": "ipv4", 00:23:25.638 "trsvcid": "$NVMF_PORT", 00:23:25.638 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.638 "hdgst": ${hdgst:-false}, 00:23:25.638 "ddgst": ${ddgst:-false} 00:23:25.638 }, 00:23:25.638 "method": "bdev_nvme_attach_controller" 00:23:25.638 } 00:23:25.638 EOF 00:23:25.638 )") 00:23:25.638 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:23:25.638 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:23:25.638 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:23:25.638 20:24:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:25.638 "params": { 00:23:25.638 "name": "Nvme1", 00:23:25.638 "trtype": "tcp", 00:23:25.638 "traddr": "10.0.0.2", 00:23:25.638 "adrfam": "ipv4", 00:23:25.638 "trsvcid": "4420", 00:23:25.638 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.638 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:25.638 "hdgst": false, 00:23:25.638 "ddgst": false 00:23:25.638 }, 00:23:25.638 "method": "bdev_nvme_attach_controller" 00:23:25.638 }' 00:23:25.897 [2024-11-18 20:24:37.680008] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:23:25.897 [2024-11-18 20:24:37.680088] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid270068 ] 00:23:25.897 [2024-11-18 20:24:37.753630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:25.897 [2024-11-18 20:24:37.806098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.897 [2024-11-18 20:24:37.806151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.897 [2024-11-18 20:24:37.806155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.158 I/O targets: 00:23:26.158 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:26.158 00:23:26.158 00:23:26.158 CUnit - A unit testing framework for C - Version 2.1-3 00:23:26.158 http://cunit.sourceforge.net/ 00:23:26.158 00:23:26.158 00:23:26.158 Suite: bdevio tests on: Nvme1n1 00:23:26.158 Test: blockdev write read block ...passed 00:23:26.158 Test: blockdev write zeroes read block ...passed 00:23:26.158 Test: blockdev write zeroes read no split ...passed 00:23:26.419 Test: blockdev write zeroes 
read split ...passed 00:23:26.419 Test: blockdev write zeroes read split partial ...passed 00:23:26.419 Test: blockdev reset ...[2024-11-18 20:24:38.238122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:26.419 [2024-11-18 20:24:38.238232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13dc6a0 (9): Bad file descriptor 00:23:26.419 [2024-11-18 20:24:38.293960] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:23:26.419 passed 00:23:26.419 Test: blockdev write read 8 blocks ...passed 00:23:26.419 Test: blockdev write read size > 128k ...passed 00:23:26.419 Test: blockdev write read invalid size ...passed 00:23:26.419 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:26.419 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:26.419 Test: blockdev write read max offset ...passed 00:23:26.419 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:26.678 Test: blockdev writev readv 8 blocks ...passed 00:23:26.678 Test: blockdev writev readv 30 x 1block ...passed 00:23:26.678 Test: blockdev writev readv block ...passed 00:23:26.678 Test: blockdev writev readv size > 128k ...passed 00:23:26.678 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:26.678 Test: blockdev comparev and writev ...[2024-11-18 20:24:38.549627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:26.678 [2024-11-18 20:24:38.549672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.678 [2024-11-18 20:24:38.549698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:26.678 [2024-11-18 
20:24:38.549716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:26.678 [2024-11-18 20:24:38.550045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:26.678 [2024-11-18 20:24:38.550070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:26.678 [2024-11-18 20:24:38.550093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:26.678 [2024-11-18 20:24:38.550109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:26.678 [2024-11-18 20:24:38.550443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:26.678 [2024-11-18 20:24:38.550466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:26.678 [2024-11-18 20:24:38.550488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:26.678 [2024-11-18 20:24:38.550504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:26.678 [2024-11-18 20:24:38.550854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:26.678 [2024-11-18 20:24:38.550878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:26.678 [2024-11-18 20:24:38.550899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:23:26.678 [2024-11-18 20:24:38.550915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:26.678 passed 00:23:26.678 Test: blockdev nvme passthru rw ...passed 00:23:26.678 Test: blockdev nvme passthru vendor specific ...[2024-11-18 20:24:38.634890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:26.678 [2024-11-18 20:24:38.634928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:26.678 [2024-11-18 20:24:38.635076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:26.678 [2024-11-18 20:24:38.635099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:26.678 [2024-11-18 20:24:38.635242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:26.678 [2024-11-18 20:24:38.635265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:26.678 [2024-11-18 20:24:38.635408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:26.679 [2024-11-18 20:24:38.635431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:26.679 passed 00:23:26.679 Test: blockdev nvme admin passthru ...passed 00:23:26.936 Test: blockdev copy ...passed 00:23:26.936 00:23:26.936 Run Summary: Type Total Ran Passed Failed Inactive 00:23:26.936 suites 1 1 n/a 0 0 00:23:26.936 tests 23 23 23 0 0 00:23:26.936 asserts 152 152 152 0 n/a 00:23:26.936 00:23:26.936 Elapsed time = 1.309 seconds 
00:23:27.194 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:27.194 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.194 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:27.194 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.194 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:27.194 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:27.194 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:27.194 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:23:27.194 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:27.194 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:23:27.194 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:27.194 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:27.194 rmmod nvme_tcp 00:23:27.194 rmmod nvme_fabrics 00:23:27.194 rmmod nvme_keyring 00:23:27.194 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:27.194 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:23:27.194 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:23:27.194 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 270023 ']' 00:23:27.194 20:24:39 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 270023 00:23:27.194 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 270023 ']' 00:23:27.195 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 270023 00:23:27.195 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:23:27.195 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.195 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 270023 00:23:27.195 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:23:27.195 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:23:27.195 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 270023' 00:23:27.195 killing process with pid 270023 00:23:27.195 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 270023 00:23:27.195 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 270023 00:23:27.452 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:27.452 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:27.452 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:27.452 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:23:27.452 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:23:27.452 20:24:39 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:27.452 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:23:27.452 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:27.452 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:27.452 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.452 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:27.452 20:24:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:29.992 00:23:29.992 real 0m6.622s 00:23:29.992 user 0m10.876s 00:23:29.992 sys 0m2.610s 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:29.992 ************************************ 00:23:29.992 END TEST nvmf_bdevio_no_huge 00:23:29.992 ************************************ 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:29.992 
************************************ 00:23:29.992 START TEST nvmf_tls 00:23:29.992 ************************************ 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:29.992 * Looking for test storage... 00:23:29.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:23:29.992 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:29.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.993 --rc genhtml_branch_coverage=1 00:23:29.993 --rc genhtml_function_coverage=1 00:23:29.993 --rc genhtml_legend=1 00:23:29.993 --rc geninfo_all_blocks=1 00:23:29.993 --rc geninfo_unexecuted_blocks=1 00:23:29.993 00:23:29.993 ' 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:29.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.993 --rc genhtml_branch_coverage=1 00:23:29.993 --rc genhtml_function_coverage=1 00:23:29.993 --rc genhtml_legend=1 00:23:29.993 --rc geninfo_all_blocks=1 00:23:29.993 --rc geninfo_unexecuted_blocks=1 00:23:29.993 00:23:29.993 ' 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:29.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.993 --rc genhtml_branch_coverage=1 00:23:29.993 --rc genhtml_function_coverage=1 00:23:29.993 --rc genhtml_legend=1 00:23:29.993 --rc geninfo_all_blocks=1 00:23:29.993 --rc geninfo_unexecuted_blocks=1 00:23:29.993 00:23:29.993 ' 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:29.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.993 --rc genhtml_branch_coverage=1 00:23:29.993 --rc genhtml_function_coverage=1 00:23:29.993 --rc genhtml_legend=1 00:23:29.993 --rc geninfo_all_blocks=1 00:23:29.993 --rc geninfo_unexecuted_blocks=1 00:23:29.993 00:23:29.993 ' 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.993 
20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:29.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:23:29.993 20:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:31.896 20:24:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:31.896 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:31.896 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:31.896 20:24:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:31.896 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:31.896 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:31.897 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:31.897 20:24:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:31.897 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:31.897 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:23:31.897 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:31.897 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:31.897 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:31.897 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:31.897 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:31.897 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:31.897 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:31.897 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:31.897 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:31.897 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:31.897 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:31.897 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:31.897 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:31.897 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:31.897 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:31.897 
20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:31.897 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:31.897 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:32.156 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:32.156 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:32.156 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:32.156 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:32.156 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:32.156 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:32.156 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:32.156 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:32.156 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:32.156 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:23:32.156 00:23:32.156 --- 10.0.0.2 ping statistics --- 00:23:32.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.156 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:23:32.156 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:32.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:32.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:23:32.156 00:23:32.156 --- 10.0.0.1 ping statistics --- 00:23:32.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.156 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:23:32.156 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:32.156 20:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:23:32.156 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:32.156 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:32.156 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:32.156 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:32.156 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:32.156 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:32.156 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:32.156 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:32.156 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:32.156 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:32.156 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.156 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=272246 00:23:32.156 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:23:32.156 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 272246 00:23:32.156 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 272246 ']' 00:23:32.156 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.156 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.156 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.156 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.156 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.156 [2024-11-18 20:24:44.077635] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:23:32.156 [2024-11-18 20:24:44.077749] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.156 [2024-11-18 20:24:44.155878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.415 [2024-11-18 20:24:44.200433] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.415 [2024-11-18 20:24:44.200482] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:32.415 [2024-11-18 20:24:44.200510] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.415 [2024-11-18 20:24:44.200522] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.415 [2024-11-18 20:24:44.200531] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:32.415 [2024-11-18 20:24:44.201118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.415 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:32.415 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:32.415 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:32.415 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:32.415 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.415 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.415 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:23:32.415 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:32.673 true 00:23:32.673 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:32.673 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:32.933 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:32.933 20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:32.933 
20:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:33.192 20:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:33.192 20:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:33.450 20:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:33.450 20:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:33.450 20:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:33.709 20:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:33.709 20:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:33.968 20:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:33.968 20:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:33.968 20:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:33.968 20:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:34.536 20:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:34.536 20:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:34.536 20:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:23:34.536 20:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:34.536 20:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:34.796 20:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:34.796 20:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:34.796 20:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:35.054 20:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:35.054 20:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:35.620 20:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:35.620 20:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:35.620 20:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:35.620 20:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:35.620 20:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:35.620 20:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:35.620 20:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:35.620 20:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:35.620 20:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:35.620 20:24:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:35.620 20:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:35.620 20:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:35.620 20:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:35.620 20:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:35.620 20:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:23:35.620 20:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:35.620 20:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:35.620 20:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:35.620 20:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:35.620 20:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.dqQDTyk177 00:23:35.620 20:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:35.620 20:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.issWGZ7FjK 00:23:35.621 20:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:35.621 20:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:35.621 20:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.dqQDTyk177 00:23:35.621 20:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.issWGZ7FjK 00:23:35.621 20:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:35.879 20:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:36.138 20:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.dqQDTyk177 00:23:36.138 20:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.dqQDTyk177 00:23:36.138 20:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:36.398 [2024-11-18 20:24:48.320237] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.398 20:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:36.659 20:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:36.917 [2024-11-18 20:24:48.857703] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:36.917 [2024-11-18 20:24:48.858003] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.917 20:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:37.175 malloc0 00:23:37.175 20:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:37.432 20:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.dqQDTyk177 00:23:37.691 20:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:38.261 20:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.dqQDTyk177 00:23:48.247 Initializing NVMe Controllers 00:23:48.247 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:48.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:48.247 Initialization complete. Launching workers. 
00:23:48.247 ======================================================== 00:23:48.247 Latency(us) 00:23:48.247 Device Information : IOPS MiB/s Average min max 00:23:48.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8750.88 34.18 7315.64 1344.55 10047.15 00:23:48.247 ======================================================== 00:23:48.247 Total : 8750.88 34.18 7315.64 1344.55 10047.15 00:23:48.247 00:23:48.247 20:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dqQDTyk177 00:23:48.247 20:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:48.247 20:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:48.247 20:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:48.247 20:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.dqQDTyk177 00:23:48.247 20:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:48.247 20:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=274144 00:23:48.247 20:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:48.247 20:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:48.247 20:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 274144 /var/tmp/bdevperf.sock 00:23:48.247 20:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 274144 ']' 00:23:48.247 20:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:23:48.247 20:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:48.247 20:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:48.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:48.247 20:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:48.247 20:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.247 [2024-11-18 20:25:00.110016] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:23:48.247 [2024-11-18 20:25:00.110100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid274144 ] 00:23:48.247 [2024-11-18 20:25:00.176884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.247 [2024-11-18 20:25:00.222989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:48.505 20:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:48.505 20:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:48.505 20:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dqQDTyk177 00:23:48.761 20:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:23:49.019 [2024-11-18 20:25:00.958753] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:49.279 TLSTESTn1 00:23:49.279 20:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:49.279 Running I/O for 10 seconds... 00:23:51.162 3437.00 IOPS, 13.43 MiB/s [2024-11-18T19:25:04.550Z] 3430.50 IOPS, 13.40 MiB/s [2024-11-18T19:25:05.489Z] 3447.00 IOPS, 13.46 MiB/s [2024-11-18T19:25:06.428Z] 3450.75 IOPS, 13.48 MiB/s [2024-11-18T19:25:07.366Z] 3453.40 IOPS, 13.49 MiB/s [2024-11-18T19:25:08.304Z] 3462.17 IOPS, 13.52 MiB/s [2024-11-18T19:25:09.245Z] 3463.57 IOPS, 13.53 MiB/s [2024-11-18T19:25:10.179Z] 3473.25 IOPS, 13.57 MiB/s [2024-11-18T19:25:11.559Z] 3469.11 IOPS, 13.55 MiB/s [2024-11-18T19:25:11.559Z] 3459.60 IOPS, 13.51 MiB/s 00:23:59.551 Latency(us) 00:23:59.551 [2024-11-18T19:25:11.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.551 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:59.551 Verification LBA range: start 0x0 length 0x2000 00:23:59.551 TLSTESTn1 : 10.02 3465.35 13.54 0.00 0.00 36874.81 6165.24 37671.06 00:23:59.551 [2024-11-18T19:25:11.559Z] =================================================================================================================== 00:23:59.551 [2024-11-18T19:25:11.559Z] Total : 3465.35 13.54 0.00 0.00 36874.81 6165.24 37671.06 00:23:59.551 { 00:23:59.551 "results": [ 00:23:59.551 { 00:23:59.551 "job": "TLSTESTn1", 00:23:59.551 "core_mask": "0x4", 00:23:59.551 "workload": "verify", 00:23:59.551 "status": "finished", 00:23:59.551 "verify_range": { 00:23:59.551 "start": 0, 00:23:59.551 "length": 8192 00:23:59.551 }, 00:23:59.551 "queue_depth": 128, 00:23:59.551 "io_size": 4096, 00:23:59.551 "runtime": 10.020064, 00:23:59.551 "iops": 
3465.3471275233373, 00:23:59.551 "mibps": 13.536512216888037, 00:23:59.551 "io_failed": 0, 00:23:59.551 "io_timeout": 0, 00:23:59.551 "avg_latency_us": 36874.81354730187, 00:23:59.551 "min_latency_us": 6165.2385185185185, 00:23:59.551 "max_latency_us": 37671.0637037037 00:23:59.551 } 00:23:59.551 ], 00:23:59.551 "core_count": 1 00:23:59.551 } 00:23:59.551 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:59.551 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 274144 00:23:59.551 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 274144 ']' 00:23:59.551 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 274144 00:23:59.551 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:59.551 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:59.551 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 274144 00:23:59.551 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:59.551 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:59.551 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 274144' 00:23:59.551 killing process with pid 274144 00:23:59.551 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 274144 00:23:59.551 Received shutdown signal, test time was about 10.000000 seconds 00:23:59.551 00:23:59.551 Latency(us) 00:23:59.551 [2024-11-18T19:25:11.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.551 [2024-11-18T19:25:11.559Z] 
=================================================================================================================== 00:23:59.551 [2024-11-18T19:25:11.559Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:59.551 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 274144 00:23:59.551 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.issWGZ7FjK 00:23:59.551 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:59.551 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.issWGZ7FjK 00:23:59.552 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:59.552 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:59.552 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:59.552 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:59.552 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.issWGZ7FjK 00:23:59.552 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:59.552 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:59.552 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:59.552 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.issWGZ7FjK 00:23:59.552 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:59.552 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=275476 00:23:59.552 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:59.552 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:59.552 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 275476 /var/tmp/bdevperf.sock 00:23:59.552 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 275476 ']' 00:23:59.552 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:59.552 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:59.552 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:59.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:59.552 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:59.552 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.552 [2024-11-18 20:25:11.501067] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:23:59.552 [2024-11-18 20:25:11.501151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275476 ] 00:23:59.809 [2024-11-18 20:25:11.568410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.809 [2024-11-18 20:25:11.612879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:59.809 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:59.809 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:59.809 20:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.issWGZ7FjK 00:24:00.067 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:00.326 [2024-11-18 20:25:12.271402] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:00.326 [2024-11-18 20:25:12.276923] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:00.326 [2024-11-18 20:25:12.277448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2143370 (107): Transport endpoint is not connected 00:24:00.326 [2024-11-18 20:25:12.278440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2143370 (9): Bad file descriptor 00:24:00.326 
[2024-11-18 20:25:12.279440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:24:00.326 [2024-11-18 20:25:12.279460] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:00.326 [2024-11-18 20:25:12.279474] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:00.326 [2024-11-18 20:25:12.279492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:24:00.326 request: 00:24:00.326 { 00:24:00.326 "name": "TLSTEST", 00:24:00.326 "trtype": "tcp", 00:24:00.326 "traddr": "10.0.0.2", 00:24:00.326 "adrfam": "ipv4", 00:24:00.326 "trsvcid": "4420", 00:24:00.326 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.326 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:00.326 "prchk_reftag": false, 00:24:00.326 "prchk_guard": false, 00:24:00.326 "hdgst": false, 00:24:00.326 "ddgst": false, 00:24:00.326 "psk": "key0", 00:24:00.326 "allow_unrecognized_csi": false, 00:24:00.326 "method": "bdev_nvme_attach_controller", 00:24:00.326 "req_id": 1 00:24:00.326 } 00:24:00.326 Got JSON-RPC error response 00:24:00.326 response: 00:24:00.326 { 00:24:00.326 "code": -5, 00:24:00.326 "message": "Input/output error" 00:24:00.326 } 00:24:00.326 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 275476 00:24:00.326 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 275476 ']' 00:24:00.326 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 275476 00:24:00.326 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:00.326 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:00.326 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275476 00:24:00.585 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:00.585 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:00.585 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275476' 00:24:00.585 killing process with pid 275476 00:24:00.585 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 275476 00:24:00.585 Received shutdown signal, test time was about 10.000000 seconds 00:24:00.585 00:24:00.585 Latency(us) 00:24:00.585 [2024-11-18T19:25:12.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.585 [2024-11-18T19:25:12.593Z] =================================================================================================================== 00:24:00.585 [2024-11-18T19:25:12.593Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:00.585 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 275476 00:24:00.585 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:00.585 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:00.585 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:00.585 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:00.585 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:00.585 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.dqQDTyk177 00:24:00.585 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:24:00.585 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.dqQDTyk177 00:24:00.585 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:00.585 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:00.585 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:00.585 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:00.586 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.dqQDTyk177 00:24:00.586 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:00.586 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:00.586 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:24:00.586 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.dqQDTyk177 00:24:00.586 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:00.586 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=275622 00:24:00.586 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:00.586 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:00.586 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 275622 
/var/tmp/bdevperf.sock 00:24:00.586 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 275622 ']' 00:24:00.586 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:00.586 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:00.586 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:00.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:00.586 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.586 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.586 [2024-11-18 20:25:12.573256] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:24:00.586 [2024-11-18 20:25:12.573353] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275622 ] 00:24:00.844 [2024-11-18 20:25:12.640100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.844 [2024-11-18 20:25:12.683322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.844 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:00.844 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:00.844 20:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dqQDTyk177 00:24:01.101 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:24:01.359 [2024-11-18 20:25:13.322525] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:01.359 [2024-11-18 20:25:13.328016] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:01.359 [2024-11-18 20:25:13.328048] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:01.359 [2024-11-18 20:25:13.328100] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:24:01.359 [2024-11-18 20:25:13.328686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12da370 (107): Transport endpoint is not connected 00:24:01.359 [2024-11-18 20:25:13.329676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12da370 (9): Bad file descriptor 00:24:01.359 [2024-11-18 20:25:13.330682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:24:01.359 [2024-11-18 20:25:13.330703] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:01.359 [2024-11-18 20:25:13.330717] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:01.359 [2024-11-18 20:25:13.330736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:24:01.359 request: 00:24:01.359 { 00:24:01.359 "name": "TLSTEST", 00:24:01.359 "trtype": "tcp", 00:24:01.359 "traddr": "10.0.0.2", 00:24:01.359 "adrfam": "ipv4", 00:24:01.359 "trsvcid": "4420", 00:24:01.359 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.359 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:01.359 "prchk_reftag": false, 00:24:01.359 "prchk_guard": false, 00:24:01.359 "hdgst": false, 00:24:01.359 "ddgst": false, 00:24:01.359 "psk": "key0", 00:24:01.359 "allow_unrecognized_csi": false, 00:24:01.359 "method": "bdev_nvme_attach_controller", 00:24:01.359 "req_id": 1 00:24:01.359 } 00:24:01.359 Got JSON-RPC error response 00:24:01.359 response: 00:24:01.359 { 00:24:01.359 "code": -5, 00:24:01.359 "message": "Input/output error" 00:24:01.359 } 00:24:01.359 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 275622 00:24:01.359 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 275622 ']' 00:24:01.359 20:25:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 275622 00:24:01.359 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:01.359 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:01.359 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275622 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275622' 00:24:01.617 killing process with pid 275622 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 275622 00:24:01.617 Received shutdown signal, test time was about 10.000000 seconds 00:24:01.617 00:24:01.617 Latency(us) 00:24:01.617 [2024-11-18T19:25:13.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.617 [2024-11-18T19:25:13.625Z] =================================================================================================================== 00:24:01.617 [2024-11-18T19:25:13.625Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 275622 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:01.617 20:25:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.dqQDTyk177 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.dqQDTyk177 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.dqQDTyk177 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.dqQDTyk177 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=275766 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 275766 /var/tmp/bdevperf.sock 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 275766 ']' 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:01.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:01.617 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.617 [2024-11-18 20:25:13.596321] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:24:01.617 [2024-11-18 20:25:13.596396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275766 ] 00:24:01.874 [2024-11-18 20:25:13.663733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.874 [2024-11-18 20:25:13.710416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:01.874 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:01.874 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:01.874 20:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dqQDTyk177 00:24:02.132 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:02.393 [2024-11-18 20:25:14.378272] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:02.393 [2024-11-18 20:25:14.389902] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:02.393 [2024-11-18 20:25:14.389946] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:02.393 [2024-11-18 20:25:14.389997] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:24:02.393 [2024-11-18 20:25:14.390271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2b370 (107): Transport endpoint is not connected 00:24:02.393 [2024-11-18 20:25:14.391261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2b370 (9): Bad file descriptor 00:24:02.393 [2024-11-18 20:25:14.392260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:24:02.393 [2024-11-18 20:25:14.392286] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:02.393 [2024-11-18 20:25:14.392300] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:24:02.393 [2024-11-18 20:25:14.392318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:24:02.393 request: 00:24:02.393 { 00:24:02.393 "name": "TLSTEST", 00:24:02.393 "trtype": "tcp", 00:24:02.393 "traddr": "10.0.0.2", 00:24:02.393 "adrfam": "ipv4", 00:24:02.393 "trsvcid": "4420", 00:24:02.393 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:02.393 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:02.393 "prchk_reftag": false, 00:24:02.393 "prchk_guard": false, 00:24:02.393 "hdgst": false, 00:24:02.393 "ddgst": false, 00:24:02.393 "psk": "key0", 00:24:02.393 "allow_unrecognized_csi": false, 00:24:02.393 "method": "bdev_nvme_attach_controller", 00:24:02.393 "req_id": 1 00:24:02.393 } 00:24:02.393 Got JSON-RPC error response 00:24:02.393 response: 00:24:02.393 { 00:24:02.393 "code": -5, 00:24:02.393 "message": "Input/output error" 00:24:02.393 } 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 275766 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 275766 ']' 00:24:02.652 20:25:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 275766 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275766 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275766' 00:24:02.652 killing process with pid 275766 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 275766 00:24:02.652 Received shutdown signal, test time was about 10.000000 seconds 00:24:02.652 00:24:02.652 Latency(us) 00:24:02.652 [2024-11-18T19:25:14.660Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.652 [2024-11-18T19:25:14.660Z] =================================================================================================================== 00:24:02.652 [2024-11-18T19:25:14.660Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 275766 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:02.652 20:25:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=275904 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 275904 /var/tmp/bdevperf.sock 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 275904 ']' 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:02.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:02.652 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.912 [2024-11-18 20:25:14.690845] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:24:02.912 [2024-11-18 20:25:14.690943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275904 ] 00:24:02.912 [2024-11-18 20:25:14.756566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.912 [2024-11-18 20:25:14.799467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:02.912 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:02.912 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:02.912 20:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:24:03.170 [2024-11-18 20:25:15.177068] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:24:03.170 [2024-11-18 20:25:15.177106] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:03.428 request: 00:24:03.428 { 00:24:03.428 "name": "key0", 00:24:03.428 "path": "", 00:24:03.428 "method": "keyring_file_add_key", 00:24:03.428 "req_id": 1 00:24:03.428 } 00:24:03.428 Got JSON-RPC error response 00:24:03.428 response: 00:24:03.428 { 00:24:03.428 "code": -1, 00:24:03.429 "message": "Operation not permitted" 00:24:03.429 } 00:24:03.429 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:03.687 [2024-11-18 20:25:15.453908] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:24:03.687 [2024-11-18 20:25:15.453966] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:03.687 request: 00:24:03.687 { 00:24:03.687 "name": "TLSTEST", 00:24:03.687 "trtype": "tcp", 00:24:03.687 "traddr": "10.0.0.2", 00:24:03.687 "adrfam": "ipv4", 00:24:03.687 "trsvcid": "4420", 00:24:03.687 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.687 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:03.687 "prchk_reftag": false, 00:24:03.687 "prchk_guard": false, 00:24:03.687 "hdgst": false, 00:24:03.687 "ddgst": false, 00:24:03.687 "psk": "key0", 00:24:03.687 "allow_unrecognized_csi": false, 00:24:03.687 "method": "bdev_nvme_attach_controller", 00:24:03.687 "req_id": 1 00:24:03.687 } 00:24:03.687 Got JSON-RPC error response 00:24:03.687 response: 00:24:03.687 { 00:24:03.687 "code": -126, 00:24:03.687 "message": "Required key not available" 00:24:03.687 } 00:24:03.687 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 275904 00:24:03.687 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 275904 ']' 00:24:03.687 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 275904 00:24:03.687 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:03.687 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.687 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275904 00:24:03.687 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:03.687 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:03.687 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275904' 00:24:03.687 killing process with pid 275904 00:24:03.687 
20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 275904 00:24:03.687 Received shutdown signal, test time was about 10.000000 seconds 00:24:03.687 00:24:03.687 Latency(us) 00:24:03.687 [2024-11-18T19:25:15.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.687 [2024-11-18T19:25:15.695Z] =================================================================================================================== 00:24:03.687 [2024-11-18T19:25:15.695Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:03.687 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 275904 00:24:03.687 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:03.687 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:03.687 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:03.687 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:03.687 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:03.687 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 272246 00:24:03.687 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 272246 ']' 00:24:03.687 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 272246 00:24:03.687 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:03.687 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.687 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 272246 00:24:03.945 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:24:03.945 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:03.945 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 272246' 00:24:03.945 killing process with pid 272246 00:24:03.945 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 272246 00:24:03.945 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 272246 00:24:03.945 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:03.945 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:03.945 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:24:03.945 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:03.945 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:03.945 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:24:03.945 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:24:04.203 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:04.203 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:24:04.203 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.pEwW7BeB6i 00:24:04.203 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:04.203 20:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.pEwW7BeB6i 00:24:04.203 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:24:04.203 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:04.203 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:04.203 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:04.203 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=276057 00:24:04.203 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:04.203 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 276057 00:24:04.203 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 276057 ']' 00:24:04.203 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.203 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:04.203 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.203 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:04.203 20:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:04.203 [2024-11-18 20:25:16.016689] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:24:04.203 [2024-11-18 20:25:16.016803] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:04.203 [2024-11-18 20:25:16.086132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.203 [2024-11-18 20:25:16.125993] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:04.203 [2024-11-18 20:25:16.126055] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:04.203 [2024-11-18 20:25:16.126079] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:04.203 [2024-11-18 20:25:16.126089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:04.203 [2024-11-18 20:25:16.126098] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:04.203 [2024-11-18 20:25:16.126654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:04.461 20:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:04.461 20:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:04.461 20:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:04.461 20:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:04.461 20:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:04.461 20:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:04.461 20:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.pEwW7BeB6i 00:24:04.461 20:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pEwW7BeB6i 00:24:04.461 20:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:04.719 [2024-11-18 20:25:16.514257] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:04.719 20:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:04.977 20:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:05.235 [2024-11-18 20:25:17.039696] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:05.235 [2024-11-18 20:25:17.039943] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:24:05.235 20:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:05.493 malloc0 00:24:05.493 20:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:05.751 20:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pEwW7BeB6i 00:24:06.009 20:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:06.268 20:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pEwW7BeB6i 00:24:06.268 20:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:06.268 20:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:06.268 20:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:06.268 20:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pEwW7BeB6i 00:24:06.268 20:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:06.268 20:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=276357 00:24:06.268 20:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:06.268 20:25:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:06.268 20:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 276357 /var/tmp/bdevperf.sock 00:24:06.268 20:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 276357 ']' 00:24:06.268 20:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:06.268 20:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:06.268 20:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:06.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:06.268 20:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:06.268 20:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:06.268 [2024-11-18 20:25:18.183838] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:24:06.268 [2024-11-18 20:25:18.183931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid276357 ] 00:24:06.268 [2024-11-18 20:25:18.249323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.526 [2024-11-18 20:25:18.294883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:06.526 20:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:06.526 20:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:06.526 20:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pEwW7BeB6i 00:24:06.784 20:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:07.042 [2024-11-18 20:25:18.957651] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:07.042 TLSTESTn1 00:24:07.042 20:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:07.309 Running I/O for 10 seconds... 
00:24:09.185 3268.00 IOPS, 12.77 MiB/s [2024-11-18T19:25:22.574Z] 3349.50 IOPS, 13.08 MiB/s [2024-11-18T19:25:23.513Z] 3412.33 IOPS, 13.33 MiB/s [2024-11-18T19:25:24.454Z] 3423.00 IOPS, 13.37 MiB/s [2024-11-18T19:25:25.392Z] 3425.20 IOPS, 13.38 MiB/s [2024-11-18T19:25:26.331Z] 3427.17 IOPS, 13.39 MiB/s [2024-11-18T19:25:27.267Z] 3420.14 IOPS, 13.36 MiB/s [2024-11-18T19:25:28.207Z] 3411.38 IOPS, 13.33 MiB/s [2024-11-18T19:25:29.588Z] 3419.67 IOPS, 13.36 MiB/s [2024-11-18T19:25:29.588Z] 3422.50 IOPS, 13.37 MiB/s 00:24:17.580 Latency(us) 00:24:17.580 [2024-11-18T19:25:29.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.580 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:17.580 Verification LBA range: start 0x0 length 0x2000 00:24:17.580 TLSTESTn1 : 10.02 3429.00 13.39 0.00 0.00 37269.95 5971.06 50098.63 00:24:17.580 [2024-11-18T19:25:29.588Z] =================================================================================================================== 00:24:17.580 [2024-11-18T19:25:29.588Z] Total : 3429.00 13.39 0.00 0.00 37269.95 5971.06 50098.63 00:24:17.580 { 00:24:17.580 "results": [ 00:24:17.580 { 00:24:17.580 "job": "TLSTESTn1", 00:24:17.580 "core_mask": "0x4", 00:24:17.580 "workload": "verify", 00:24:17.580 "status": "finished", 00:24:17.580 "verify_range": { 00:24:17.580 "start": 0, 00:24:17.580 "length": 8192 00:24:17.580 }, 00:24:17.580 "queue_depth": 128, 00:24:17.580 "io_size": 4096, 00:24:17.580 "runtime": 10.017791, 00:24:17.580 "iops": 3428.9994670481747, 00:24:17.580 "mibps": 13.394529168156932, 00:24:17.580 "io_failed": 0, 00:24:17.580 "io_timeout": 0, 00:24:17.580 "avg_latency_us": 37269.95223937628, 00:24:17.580 "min_latency_us": 5971.057777777778, 00:24:17.580 "max_latency_us": 50098.63111111111 00:24:17.580 } 00:24:17.580 ], 00:24:17.580 "core_count": 1 00:24:17.580 } 00:24:17.580 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:24:17.580 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 276357 00:24:17.580 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 276357 ']' 00:24:17.580 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 276357 00:24:17.580 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:17.580 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:17.580 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 276357 00:24:17.580 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:17.580 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:17.580 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 276357' 00:24:17.580 killing process with pid 276357 00:24:17.580 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 276357 00:24:17.580 Received shutdown signal, test time was about 10.000000 seconds 00:24:17.580 00:24:17.580 Latency(us) 00:24:17.580 [2024-11-18T19:25:29.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.580 [2024-11-18T19:25:29.589Z] =================================================================================================================== 00:24:17.581 [2024-11-18T19:25:29.589Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:17.581 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 276357 00:24:17.581 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.pEwW7BeB6i 00:24:17.581 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 
-- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pEwW7BeB6i 00:24:17.581 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:17.581 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pEwW7BeB6i 00:24:17.581 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:17.581 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:17.581 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:17.581 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:17.581 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pEwW7BeB6i 00:24:17.581 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:17.581 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:17.581 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:17.581 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pEwW7BeB6i 00:24:17.581 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:17.581 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=277673 00:24:17.581 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:17.581 20:25:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:17.581 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 277673 /var/tmp/bdevperf.sock 00:24:17.581 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 277673 ']' 00:24:17.581 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:17.581 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:17.581 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:17.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:17.581 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:17.581 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.581 [2024-11-18 20:25:29.495455] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:24:17.581 [2024-11-18 20:25:29.495550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid277673 ] 00:24:17.581 [2024-11-18 20:25:29.560434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.839 [2024-11-18 20:25:29.604627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:17.839 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:17.839 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:17.839 20:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pEwW7BeB6i 00:24:18.098 [2024-11-18 20:25:29.982128] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.pEwW7BeB6i': 0100666 00:24:18.098 [2024-11-18 20:25:29.982167] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:18.098 request: 00:24:18.098 { 00:24:18.098 "name": "key0", 00:24:18.098 "path": "/tmp/tmp.pEwW7BeB6i", 00:24:18.098 "method": "keyring_file_add_key", 00:24:18.098 "req_id": 1 00:24:18.098 } 00:24:18.098 Got JSON-RPC error response 00:24:18.098 response: 00:24:18.098 { 00:24:18.098 "code": -1, 00:24:18.098 "message": "Operation not permitted" 00:24:18.098 } 00:24:18.098 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:18.356 [2024-11-18 20:25:30.263066] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:18.356 [2024-11-18 20:25:30.263141] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:18.356 request: 00:24:18.356 { 00:24:18.356 "name": "TLSTEST", 00:24:18.356 "trtype": "tcp", 00:24:18.356 "traddr": "10.0.0.2", 00:24:18.356 "adrfam": "ipv4", 00:24:18.356 "trsvcid": "4420", 00:24:18.356 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.356 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:18.356 "prchk_reftag": false, 00:24:18.356 "prchk_guard": false, 00:24:18.356 "hdgst": false, 00:24:18.356 "ddgst": false, 00:24:18.356 "psk": "key0", 00:24:18.356 "allow_unrecognized_csi": false, 00:24:18.356 "method": "bdev_nvme_attach_controller", 00:24:18.356 "req_id": 1 00:24:18.356 } 00:24:18.356 Got JSON-RPC error response 00:24:18.356 response: 00:24:18.356 { 00:24:18.356 "code": -126, 00:24:18.356 "message": "Required key not available" 00:24:18.356 } 00:24:18.356 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 277673 00:24:18.356 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 277673 ']' 00:24:18.356 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 277673 00:24:18.356 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:18.356 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:18.356 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 277673 00:24:18.356 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:18.356 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:18.356 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 277673' 00:24:18.356 killing process with pid 277673 00:24:18.356 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 277673 00:24:18.357 Received shutdown signal, test time was about 10.000000 seconds 00:24:18.357 00:24:18.357 Latency(us) 00:24:18.357 [2024-11-18T19:25:30.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.357 [2024-11-18T19:25:30.365Z] =================================================================================================================== 00:24:18.357 [2024-11-18T19:25:30.365Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:18.357 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 277673 00:24:18.615 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:18.615 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:18.615 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:18.615 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:18.615 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:18.615 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 276057 00:24:18.615 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 276057 ']' 00:24:18.615 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 276057 00:24:18.615 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:18.615 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:18.615 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 276057 00:24:18.615 20:25:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:18.615 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:18.615 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 276057' 00:24:18.615 killing process with pid 276057 00:24:18.615 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 276057 00:24:18.615 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 276057 00:24:18.874 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:24:18.874 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:18.874 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:18.874 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.874 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=277820 00:24:18.874 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:18.874 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 277820 00:24:18.874 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 277820 ']' 00:24:18.874 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.874 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:18.874 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:18.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:18.874 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:18.874 20:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.874 [2024-11-18 20:25:30.769567] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:24:18.875 [2024-11-18 20:25:30.769666] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:18.875 [2024-11-18 20:25:30.844543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.134 [2024-11-18 20:25:30.891211] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:19.134 [2024-11-18 20:25:30.891264] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:19.134 [2024-11-18 20:25:30.891278] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:19.134 [2024-11-18 20:25:30.891288] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:19.134 [2024-11-18 20:25:30.891298] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:19.134 [2024-11-18 20:25:30.891854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.134 20:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:19.134 20:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:19.134 20:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:19.134 20:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:19.134 20:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.134 20:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:19.134 20:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.pEwW7BeB6i 00:24:19.134 20:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:19.134 20:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.pEwW7BeB6i 00:24:19.134 20:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:24:19.134 20:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:19.134 20:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:24:19.134 20:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:19.134 20:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.pEwW7BeB6i 00:24:19.134 20:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pEwW7BeB6i 00:24:19.134 20:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:19.393 [2024-11-18 20:25:31.335725] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:19.393 20:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:19.959 20:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:19.959 [2024-11-18 20:25:31.921306] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:19.959 [2024-11-18 20:25:31.921556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:19.959 20:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:20.525 malloc0 00:24:20.525 20:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:20.783 20:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pEwW7BeB6i 00:24:21.041 [2024-11-18 20:25:32.794165] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.pEwW7BeB6i': 0100666 00:24:21.041 [2024-11-18 20:25:32.794204] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:21.041 request: 00:24:21.041 { 00:24:21.041 "name": "key0", 00:24:21.041 "path": "/tmp/tmp.pEwW7BeB6i", 00:24:21.041 "method": "keyring_file_add_key", 00:24:21.041 "req_id": 1 
00:24:21.041 } 00:24:21.041 Got JSON-RPC error response 00:24:21.041 response: 00:24:21.041 { 00:24:21.041 "code": -1, 00:24:21.041 "message": "Operation not permitted" 00:24:21.041 } 00:24:21.041 20:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:21.300 [2024-11-18 20:25:33.111051] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:24:21.300 [2024-11-18 20:25:33.111094] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:21.300 request: 00:24:21.300 { 00:24:21.300 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.300 "host": "nqn.2016-06.io.spdk:host1", 00:24:21.300 "psk": "key0", 00:24:21.300 "method": "nvmf_subsystem_add_host", 00:24:21.300 "req_id": 1 00:24:21.300 } 00:24:21.300 Got JSON-RPC error response 00:24:21.300 response: 00:24:21.300 { 00:24:21.300 "code": -32603, 00:24:21.300 "message": "Internal error" 00:24:21.300 } 00:24:21.300 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:21.300 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:21.300 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:21.300 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:21.300 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 277820 00:24:21.300 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 277820 ']' 00:24:21.300 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 277820 00:24:21.300 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:21.300 20:25:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:21.300 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 277820 00:24:21.300 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:21.300 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:21.300 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 277820' 00:24:21.300 killing process with pid 277820 00:24:21.300 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 277820 00:24:21.300 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 277820 00:24:21.559 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.pEwW7BeB6i 00:24:21.559 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:24:21.559 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:21.559 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:21.559 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.559 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=278234 00:24:21.559 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:21.559 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 278234 00:24:21.559 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 278234 ']' 00:24:21.559 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.559 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:21.559 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.559 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:21.559 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.559 [2024-11-18 20:25:33.430503] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:24:21.559 [2024-11-18 20:25:33.430617] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:21.559 [2024-11-18 20:25:33.504409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.559 [2024-11-18 20:25:33.549756] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:21.559 [2024-11-18 20:25:33.549802] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:21.559 [2024-11-18 20:25:33.549826] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:21.560 [2024-11-18 20:25:33.549838] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:21.560 [2024-11-18 20:25:33.549848] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:21.560 [2024-11-18 20:25:33.550410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.818 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:21.818 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:21.818 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:21.818 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:21.818 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.818 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.818 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.pEwW7BeB6i 00:24:21.819 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pEwW7BeB6i 00:24:21.819 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:22.077 [2024-11-18 20:25:33.930518] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:22.077 20:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:22.336 20:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:22.593 [2024-11-18 20:25:34.467966] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:22.593 [2024-11-18 20:25:34.468228] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:24:22.593 20:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:22.851 malloc0 00:24:22.851 20:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:23.108 20:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pEwW7BeB6i 00:24:23.366 20:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:23.623 20:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=278467 00:24:23.623 20:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:23.623 20:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:23.623 20:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 278467 /var/tmp/bdevperf.sock 00:24:23.623 20:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 278467 ']' 00:24:23.623 20:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:23.623 20:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:23.623 20:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:24:23.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:23.623 20:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:23.623 20:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.623 [2024-11-18 20:25:35.596714] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:24:23.623 [2024-11-18 20:25:35.596794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid278467 ] 00:24:23.881 [2024-11-18 20:25:35.665449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.881 [2024-11-18 20:25:35.718234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:23.881 20:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:23.881 20:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:23.881 20:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pEwW7BeB6i 00:24:24.138 20:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:24.397 [2024-11-18 20:25:36.348023] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:24.655 TLSTESTn1 00:24:24.655 20:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:24.914 20:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:24.914 "subsystems": [ 00:24:24.914 { 00:24:24.914 "subsystem": "keyring", 00:24:24.914 "config": [ 00:24:24.914 { 00:24:24.914 "method": "keyring_file_add_key", 00:24:24.914 "params": { 00:24:24.914 "name": "key0", 00:24:24.914 "path": "/tmp/tmp.pEwW7BeB6i" 00:24:24.914 } 00:24:24.914 } 00:24:24.914 ] 00:24:24.914 }, 00:24:24.914 { 00:24:24.914 "subsystem": "iobuf", 00:24:24.914 "config": [ 00:24:24.914 { 00:24:24.914 "method": "iobuf_set_options", 00:24:24.914 "params": { 00:24:24.914 "small_pool_count": 8192, 00:24:24.914 "large_pool_count": 1024, 00:24:24.914 "small_bufsize": 8192, 00:24:24.914 "large_bufsize": 135168, 00:24:24.914 "enable_numa": false 00:24:24.914 } 00:24:24.914 } 00:24:24.914 ] 00:24:24.914 }, 00:24:24.914 { 00:24:24.914 "subsystem": "sock", 00:24:24.914 "config": [ 00:24:24.914 { 00:24:24.914 "method": "sock_set_default_impl", 00:24:24.914 "params": { 00:24:24.914 "impl_name": "posix" 00:24:24.914 } 00:24:24.914 }, 00:24:24.914 { 00:24:24.914 "method": "sock_impl_set_options", 00:24:24.914 "params": { 00:24:24.914 "impl_name": "ssl", 00:24:24.914 "recv_buf_size": 4096, 00:24:24.914 "send_buf_size": 4096, 00:24:24.914 "enable_recv_pipe": true, 00:24:24.914 "enable_quickack": false, 00:24:24.914 "enable_placement_id": 0, 00:24:24.914 "enable_zerocopy_send_server": true, 00:24:24.914 "enable_zerocopy_send_client": false, 00:24:24.914 "zerocopy_threshold": 0, 00:24:24.914 "tls_version": 0, 00:24:24.914 "enable_ktls": false 00:24:24.914 } 00:24:24.914 }, 00:24:24.914 { 00:24:24.914 "method": "sock_impl_set_options", 00:24:24.914 "params": { 00:24:24.914 "impl_name": "posix", 00:24:24.914 "recv_buf_size": 2097152, 00:24:24.914 "send_buf_size": 2097152, 00:24:24.914 "enable_recv_pipe": true, 00:24:24.914 "enable_quickack": false, 00:24:24.914 "enable_placement_id": 0, 
00:24:24.914 "enable_zerocopy_send_server": true, 00:24:24.914 "enable_zerocopy_send_client": false, 00:24:24.914 "zerocopy_threshold": 0, 00:24:24.914 "tls_version": 0, 00:24:24.914 "enable_ktls": false 00:24:24.914 } 00:24:24.914 } 00:24:24.914 ] 00:24:24.914 }, 00:24:24.914 { 00:24:24.914 "subsystem": "vmd", 00:24:24.914 "config": [] 00:24:24.914 }, 00:24:24.914 { 00:24:24.914 "subsystem": "accel", 00:24:24.914 "config": [ 00:24:24.914 { 00:24:24.914 "method": "accel_set_options", 00:24:24.914 "params": { 00:24:24.914 "small_cache_size": 128, 00:24:24.914 "large_cache_size": 16, 00:24:24.914 "task_count": 2048, 00:24:24.914 "sequence_count": 2048, 00:24:24.914 "buf_count": 2048 00:24:24.914 } 00:24:24.914 } 00:24:24.914 ] 00:24:24.914 }, 00:24:24.914 { 00:24:24.914 "subsystem": "bdev", 00:24:24.914 "config": [ 00:24:24.914 { 00:24:24.914 "method": "bdev_set_options", 00:24:24.914 "params": { 00:24:24.914 "bdev_io_pool_size": 65535, 00:24:24.914 "bdev_io_cache_size": 256, 00:24:24.914 "bdev_auto_examine": true, 00:24:24.914 "iobuf_small_cache_size": 128, 00:24:24.914 "iobuf_large_cache_size": 16 00:24:24.914 } 00:24:24.914 }, 00:24:24.914 { 00:24:24.914 "method": "bdev_raid_set_options", 00:24:24.914 "params": { 00:24:24.914 "process_window_size_kb": 1024, 00:24:24.914 "process_max_bandwidth_mb_sec": 0 00:24:24.914 } 00:24:24.914 }, 00:24:24.914 { 00:24:24.914 "method": "bdev_iscsi_set_options", 00:24:24.914 "params": { 00:24:24.914 "timeout_sec": 30 00:24:24.914 } 00:24:24.914 }, 00:24:24.914 { 00:24:24.914 "method": "bdev_nvme_set_options", 00:24:24.914 "params": { 00:24:24.914 "action_on_timeout": "none", 00:24:24.914 "timeout_us": 0, 00:24:24.914 "timeout_admin_us": 0, 00:24:24.914 "keep_alive_timeout_ms": 10000, 00:24:24.914 "arbitration_burst": 0, 00:24:24.914 "low_priority_weight": 0, 00:24:24.914 "medium_priority_weight": 0, 00:24:24.914 "high_priority_weight": 0, 00:24:24.914 "nvme_adminq_poll_period_us": 10000, 00:24:24.914 "nvme_ioq_poll_period_us": 0, 
00:24:24.914 "io_queue_requests": 0, 00:24:24.914 "delay_cmd_submit": true, 00:24:24.914 "transport_retry_count": 4, 00:24:24.914 "bdev_retry_count": 3, 00:24:24.914 "transport_ack_timeout": 0, 00:24:24.914 "ctrlr_loss_timeout_sec": 0, 00:24:24.914 "reconnect_delay_sec": 0, 00:24:24.914 "fast_io_fail_timeout_sec": 0, 00:24:24.914 "disable_auto_failback": false, 00:24:24.914 "generate_uuids": false, 00:24:24.914 "transport_tos": 0, 00:24:24.914 "nvme_error_stat": false, 00:24:24.914 "rdma_srq_size": 0, 00:24:24.914 "io_path_stat": false, 00:24:24.914 "allow_accel_sequence": false, 00:24:24.914 "rdma_max_cq_size": 0, 00:24:24.914 "rdma_cm_event_timeout_ms": 0, 00:24:24.914 "dhchap_digests": [ 00:24:24.914 "sha256", 00:24:24.914 "sha384", 00:24:24.914 "sha512" 00:24:24.914 ], 00:24:24.914 "dhchap_dhgroups": [ 00:24:24.914 "null", 00:24:24.914 "ffdhe2048", 00:24:24.914 "ffdhe3072", 00:24:24.914 "ffdhe4096", 00:24:24.914 "ffdhe6144", 00:24:24.914 "ffdhe8192" 00:24:24.914 ] 00:24:24.914 } 00:24:24.914 }, 00:24:24.914 { 00:24:24.914 "method": "bdev_nvme_set_hotplug", 00:24:24.914 "params": { 00:24:24.914 "period_us": 100000, 00:24:24.914 "enable": false 00:24:24.914 } 00:24:24.914 }, 00:24:24.914 { 00:24:24.914 "method": "bdev_malloc_create", 00:24:24.914 "params": { 00:24:24.914 "name": "malloc0", 00:24:24.914 "num_blocks": 8192, 00:24:24.914 "block_size": 4096, 00:24:24.914 "physical_block_size": 4096, 00:24:24.914 "uuid": "0e437b2f-b6d8-4bfe-bb2b-0fab5d841424", 00:24:24.914 "optimal_io_boundary": 0, 00:24:24.914 "md_size": 0, 00:24:24.914 "dif_type": 0, 00:24:24.914 "dif_is_head_of_md": false, 00:24:24.914 "dif_pi_format": 0 00:24:24.914 } 00:24:24.914 }, 00:24:24.914 { 00:24:24.914 "method": "bdev_wait_for_examine" 00:24:24.914 } 00:24:24.914 ] 00:24:24.914 }, 00:24:24.914 { 00:24:24.914 "subsystem": "nbd", 00:24:24.914 "config": [] 00:24:24.914 }, 00:24:24.914 { 00:24:24.914 "subsystem": "scheduler", 00:24:24.914 "config": [ 00:24:24.914 { 00:24:24.914 "method": 
"framework_set_scheduler", 00:24:24.914 "params": { 00:24:24.914 "name": "static" 00:24:24.914 } 00:24:24.914 } 00:24:24.914 ] 00:24:24.914 }, 00:24:24.914 { 00:24:24.914 "subsystem": "nvmf", 00:24:24.914 "config": [ 00:24:24.914 { 00:24:24.914 "method": "nvmf_set_config", 00:24:24.914 "params": { 00:24:24.914 "discovery_filter": "match_any", 00:24:24.914 "admin_cmd_passthru": { 00:24:24.914 "identify_ctrlr": false 00:24:24.914 }, 00:24:24.914 "dhchap_digests": [ 00:24:24.914 "sha256", 00:24:24.914 "sha384", 00:24:24.914 "sha512" 00:24:24.914 ], 00:24:24.914 "dhchap_dhgroups": [ 00:24:24.914 "null", 00:24:24.914 "ffdhe2048", 00:24:24.915 "ffdhe3072", 00:24:24.915 "ffdhe4096", 00:24:24.915 "ffdhe6144", 00:24:24.915 "ffdhe8192" 00:24:24.915 ] 00:24:24.915 } 00:24:24.915 }, 00:24:24.915 { 00:24:24.915 "method": "nvmf_set_max_subsystems", 00:24:24.915 "params": { 00:24:24.915 "max_subsystems": 1024 00:24:24.915 } 00:24:24.915 }, 00:24:24.915 { 00:24:24.915 "method": "nvmf_set_crdt", 00:24:24.915 "params": { 00:24:24.915 "crdt1": 0, 00:24:24.915 "crdt2": 0, 00:24:24.915 "crdt3": 0 00:24:24.915 } 00:24:24.915 }, 00:24:24.915 { 00:24:24.915 "method": "nvmf_create_transport", 00:24:24.915 "params": { 00:24:24.915 "trtype": "TCP", 00:24:24.915 "max_queue_depth": 128, 00:24:24.915 "max_io_qpairs_per_ctrlr": 127, 00:24:24.915 "in_capsule_data_size": 4096, 00:24:24.915 "max_io_size": 131072, 00:24:24.915 "io_unit_size": 131072, 00:24:24.915 "max_aq_depth": 128, 00:24:24.915 "num_shared_buffers": 511, 00:24:24.915 "buf_cache_size": 4294967295, 00:24:24.915 "dif_insert_or_strip": false, 00:24:24.915 "zcopy": false, 00:24:24.915 "c2h_success": false, 00:24:24.915 "sock_priority": 0, 00:24:24.915 "abort_timeout_sec": 1, 00:24:24.915 "ack_timeout": 0, 00:24:24.915 "data_wr_pool_size": 0 00:24:24.915 } 00:24:24.915 }, 00:24:24.915 { 00:24:24.915 "method": "nvmf_create_subsystem", 00:24:24.915 "params": { 00:24:24.915 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:24.915 
"allow_any_host": false, 00:24:24.915 "serial_number": "SPDK00000000000001", 00:24:24.915 "model_number": "SPDK bdev Controller", 00:24:24.915 "max_namespaces": 10, 00:24:24.915 "min_cntlid": 1, 00:24:24.915 "max_cntlid": 65519, 00:24:24.915 "ana_reporting": false 00:24:24.915 } 00:24:24.915 }, 00:24:24.915 { 00:24:24.915 "method": "nvmf_subsystem_add_host", 00:24:24.915 "params": { 00:24:24.915 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:24.915 "host": "nqn.2016-06.io.spdk:host1", 00:24:24.915 "psk": "key0" 00:24:24.915 } 00:24:24.915 }, 00:24:24.915 { 00:24:24.915 "method": "nvmf_subsystem_add_ns", 00:24:24.915 "params": { 00:24:24.915 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:24.915 "namespace": { 00:24:24.915 "nsid": 1, 00:24:24.915 "bdev_name": "malloc0", 00:24:24.915 "nguid": "0E437B2FB6D84BFEBB2B0FAB5D841424", 00:24:24.915 "uuid": "0e437b2f-b6d8-4bfe-bb2b-0fab5d841424", 00:24:24.915 "no_auto_visible": false 00:24:24.915 } 00:24:24.915 } 00:24:24.915 }, 00:24:24.915 { 00:24:24.915 "method": "nvmf_subsystem_add_listener", 00:24:24.915 "params": { 00:24:24.915 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:24.915 "listen_address": { 00:24:24.915 "trtype": "TCP", 00:24:24.915 "adrfam": "IPv4", 00:24:24.915 "traddr": "10.0.0.2", 00:24:24.915 "trsvcid": "4420" 00:24:24.915 }, 00:24:24.915 "secure_channel": true 00:24:24.915 } 00:24:24.915 } 00:24:24.915 ] 00:24:24.915 } 00:24:24.915 ] 00:24:24.915 }' 00:24:24.915 20:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:25.174 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:25.174 "subsystems": [ 00:24:25.174 { 00:24:25.174 "subsystem": "keyring", 00:24:25.174 "config": [ 00:24:25.174 { 00:24:25.174 "method": "keyring_file_add_key", 00:24:25.174 "params": { 00:24:25.174 "name": "key0", 00:24:25.174 "path": "/tmp/tmp.pEwW7BeB6i" 00:24:25.174 } 
00:24:25.174 } 00:24:25.174 ] 00:24:25.174 }, 00:24:25.174 { 00:24:25.174 "subsystem": "iobuf", 00:24:25.174 "config": [ 00:24:25.174 { 00:24:25.174 "method": "iobuf_set_options", 00:24:25.174 "params": { 00:24:25.174 "small_pool_count": 8192, 00:24:25.174 "large_pool_count": 1024, 00:24:25.174 "small_bufsize": 8192, 00:24:25.174 "large_bufsize": 135168, 00:24:25.174 "enable_numa": false 00:24:25.174 } 00:24:25.174 } 00:24:25.174 ] 00:24:25.174 }, 00:24:25.174 { 00:24:25.174 "subsystem": "sock", 00:24:25.174 "config": [ 00:24:25.174 { 00:24:25.174 "method": "sock_set_default_impl", 00:24:25.174 "params": { 00:24:25.174 "impl_name": "posix" 00:24:25.174 } 00:24:25.174 }, 00:24:25.174 { 00:24:25.174 "method": "sock_impl_set_options", 00:24:25.174 "params": { 00:24:25.174 "impl_name": "ssl", 00:24:25.174 "recv_buf_size": 4096, 00:24:25.174 "send_buf_size": 4096, 00:24:25.174 "enable_recv_pipe": true, 00:24:25.174 "enable_quickack": false, 00:24:25.174 "enable_placement_id": 0, 00:24:25.174 "enable_zerocopy_send_server": true, 00:24:25.174 "enable_zerocopy_send_client": false, 00:24:25.174 "zerocopy_threshold": 0, 00:24:25.174 "tls_version": 0, 00:24:25.174 "enable_ktls": false 00:24:25.174 } 00:24:25.174 }, 00:24:25.174 { 00:24:25.174 "method": "sock_impl_set_options", 00:24:25.174 "params": { 00:24:25.174 "impl_name": "posix", 00:24:25.174 "recv_buf_size": 2097152, 00:24:25.174 "send_buf_size": 2097152, 00:24:25.174 "enable_recv_pipe": true, 00:24:25.174 "enable_quickack": false, 00:24:25.174 "enable_placement_id": 0, 00:24:25.174 "enable_zerocopy_send_server": true, 00:24:25.174 "enable_zerocopy_send_client": false, 00:24:25.174 "zerocopy_threshold": 0, 00:24:25.174 "tls_version": 0, 00:24:25.174 "enable_ktls": false 00:24:25.174 } 00:24:25.174 } 00:24:25.174 ] 00:24:25.174 }, 00:24:25.174 { 00:24:25.174 "subsystem": "vmd", 00:24:25.174 "config": [] 00:24:25.174 }, 00:24:25.174 { 00:24:25.174 "subsystem": "accel", 00:24:25.174 "config": [ 00:24:25.174 { 00:24:25.174 
"method": "accel_set_options", 00:24:25.174 "params": { 00:24:25.174 "small_cache_size": 128, 00:24:25.174 "large_cache_size": 16, 00:24:25.174 "task_count": 2048, 00:24:25.174 "sequence_count": 2048, 00:24:25.174 "buf_count": 2048 00:24:25.174 } 00:24:25.174 } 00:24:25.174 ] 00:24:25.174 }, 00:24:25.174 { 00:24:25.174 "subsystem": "bdev", 00:24:25.174 "config": [ 00:24:25.174 { 00:24:25.174 "method": "bdev_set_options", 00:24:25.174 "params": { 00:24:25.174 "bdev_io_pool_size": 65535, 00:24:25.174 "bdev_io_cache_size": 256, 00:24:25.174 "bdev_auto_examine": true, 00:24:25.174 "iobuf_small_cache_size": 128, 00:24:25.174 "iobuf_large_cache_size": 16 00:24:25.174 } 00:24:25.174 }, 00:24:25.174 { 00:24:25.174 "method": "bdev_raid_set_options", 00:24:25.174 "params": { 00:24:25.174 "process_window_size_kb": 1024, 00:24:25.174 "process_max_bandwidth_mb_sec": 0 00:24:25.174 } 00:24:25.174 }, 00:24:25.174 { 00:24:25.174 "method": "bdev_iscsi_set_options", 00:24:25.174 "params": { 00:24:25.174 "timeout_sec": 30 00:24:25.174 } 00:24:25.174 }, 00:24:25.174 { 00:24:25.174 "method": "bdev_nvme_set_options", 00:24:25.174 "params": { 00:24:25.174 "action_on_timeout": "none", 00:24:25.174 "timeout_us": 0, 00:24:25.174 "timeout_admin_us": 0, 00:24:25.174 "keep_alive_timeout_ms": 10000, 00:24:25.174 "arbitration_burst": 0, 00:24:25.174 "low_priority_weight": 0, 00:24:25.174 "medium_priority_weight": 0, 00:24:25.174 "high_priority_weight": 0, 00:24:25.174 "nvme_adminq_poll_period_us": 10000, 00:24:25.174 "nvme_ioq_poll_period_us": 0, 00:24:25.174 "io_queue_requests": 512, 00:24:25.174 "delay_cmd_submit": true, 00:24:25.174 "transport_retry_count": 4, 00:24:25.174 "bdev_retry_count": 3, 00:24:25.174 "transport_ack_timeout": 0, 00:24:25.174 "ctrlr_loss_timeout_sec": 0, 00:24:25.174 "reconnect_delay_sec": 0, 00:24:25.174 "fast_io_fail_timeout_sec": 0, 00:24:25.174 "disable_auto_failback": false, 00:24:25.174 "generate_uuids": false, 00:24:25.174 "transport_tos": 0, 00:24:25.174 
"nvme_error_stat": false, 00:24:25.174 "rdma_srq_size": 0, 00:24:25.174 "io_path_stat": false, 00:24:25.174 "allow_accel_sequence": false, 00:24:25.174 "rdma_max_cq_size": 0, 00:24:25.174 "rdma_cm_event_timeout_ms": 0, 00:24:25.174 "dhchap_digests": [ 00:24:25.174 "sha256", 00:24:25.174 "sha384", 00:24:25.174 "sha512" 00:24:25.174 ], 00:24:25.174 "dhchap_dhgroups": [ 00:24:25.174 "null", 00:24:25.174 "ffdhe2048", 00:24:25.174 "ffdhe3072", 00:24:25.174 "ffdhe4096", 00:24:25.174 "ffdhe6144", 00:24:25.174 "ffdhe8192" 00:24:25.174 ] 00:24:25.174 } 00:24:25.174 }, 00:24:25.174 { 00:24:25.174 "method": "bdev_nvme_attach_controller", 00:24:25.174 "params": { 00:24:25.174 "name": "TLSTEST", 00:24:25.174 "trtype": "TCP", 00:24:25.174 "adrfam": "IPv4", 00:24:25.174 "traddr": "10.0.0.2", 00:24:25.174 "trsvcid": "4420", 00:24:25.174 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.174 "prchk_reftag": false, 00:24:25.174 "prchk_guard": false, 00:24:25.174 "ctrlr_loss_timeout_sec": 0, 00:24:25.174 "reconnect_delay_sec": 0, 00:24:25.174 "fast_io_fail_timeout_sec": 0, 00:24:25.174 "psk": "key0", 00:24:25.174 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:25.174 "hdgst": false, 00:24:25.174 "ddgst": false, 00:24:25.174 "multipath": "multipath" 00:24:25.174 } 00:24:25.174 }, 00:24:25.174 { 00:24:25.174 "method": "bdev_nvme_set_hotplug", 00:24:25.174 "params": { 00:24:25.174 "period_us": 100000, 00:24:25.174 "enable": false 00:24:25.174 } 00:24:25.174 }, 00:24:25.174 { 00:24:25.174 "method": "bdev_wait_for_examine" 00:24:25.174 } 00:24:25.174 ] 00:24:25.174 }, 00:24:25.174 { 00:24:25.175 "subsystem": "nbd", 00:24:25.175 "config": [] 00:24:25.175 } 00:24:25.175 ] 00:24:25.175 }' 00:24:25.175 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 278467 00:24:25.175 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 278467 ']' 00:24:25.175 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 278467 00:24:25.175 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:25.175 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:25.175 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 278467 00:24:25.175 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:25.175 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:25.175 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 278467' 00:24:25.175 killing process with pid 278467 00:24:25.175 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 278467 00:24:25.175 Received shutdown signal, test time was about 10.000000 seconds 00:24:25.175 00:24:25.175 Latency(us) 00:24:25.175 [2024-11-18T19:25:37.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.175 [2024-11-18T19:25:37.183Z] =================================================================================================================== 00:24:25.175 [2024-11-18T19:25:37.183Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:25.175 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 278467 00:24:25.432 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 278234 00:24:25.432 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 278234 ']' 00:24:25.432 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 278234 00:24:25.432 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:25.432 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:25.432 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 278234 00:24:25.432 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:25.432 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:25.432 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 278234' 00:24:25.432 killing process with pid 278234 00:24:25.432 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 278234 00:24:25.432 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 278234 00:24:25.693 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:25.693 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:25.693 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:25.693 "subsystems": [ 00:24:25.693 { 00:24:25.693 "subsystem": "keyring", 00:24:25.693 "config": [ 00:24:25.693 { 00:24:25.693 "method": "keyring_file_add_key", 00:24:25.693 "params": { 00:24:25.693 "name": "key0", 00:24:25.693 "path": "/tmp/tmp.pEwW7BeB6i" 00:24:25.693 } 00:24:25.693 } 00:24:25.693 ] 00:24:25.693 }, 00:24:25.693 { 00:24:25.693 "subsystem": "iobuf", 00:24:25.693 "config": [ 00:24:25.693 { 00:24:25.693 "method": "iobuf_set_options", 00:24:25.693 "params": { 00:24:25.693 "small_pool_count": 8192, 00:24:25.693 "large_pool_count": 1024, 00:24:25.693 "small_bufsize": 8192, 00:24:25.693 "large_bufsize": 135168, 00:24:25.693 "enable_numa": false 00:24:25.693 } 00:24:25.693 } 00:24:25.693 ] 00:24:25.693 }, 00:24:25.693 { 00:24:25.693 "subsystem": "sock", 00:24:25.693 "config": [ 00:24:25.693 { 00:24:25.693 "method": 
"sock_set_default_impl", 00:24:25.693 "params": { 00:24:25.693 "impl_name": "posix" 00:24:25.693 } 00:24:25.693 }, 00:24:25.693 { 00:24:25.693 "method": "sock_impl_set_options", 00:24:25.693 "params": { 00:24:25.693 "impl_name": "ssl", 00:24:25.693 "recv_buf_size": 4096, 00:24:25.693 "send_buf_size": 4096, 00:24:25.693 "enable_recv_pipe": true, 00:24:25.693 "enable_quickack": false, 00:24:25.693 "enable_placement_id": 0, 00:24:25.693 "enable_zerocopy_send_server": true, 00:24:25.693 "enable_zerocopy_send_client": false, 00:24:25.693 "zerocopy_threshold": 0, 00:24:25.693 "tls_version": 0, 00:24:25.693 "enable_ktls": false 00:24:25.693 } 00:24:25.693 }, 00:24:25.693 { 00:24:25.693 "method": "sock_impl_set_options", 00:24:25.693 "params": { 00:24:25.693 "impl_name": "posix", 00:24:25.693 "recv_buf_size": 2097152, 00:24:25.693 "send_buf_size": 2097152, 00:24:25.693 "enable_recv_pipe": true, 00:24:25.693 "enable_quickack": false, 00:24:25.693 "enable_placement_id": 0, 00:24:25.693 "enable_zerocopy_send_server": true, 00:24:25.693 "enable_zerocopy_send_client": false, 00:24:25.693 "zerocopy_threshold": 0, 00:24:25.693 "tls_version": 0, 00:24:25.693 "enable_ktls": false 00:24:25.693 } 00:24:25.693 } 00:24:25.693 ] 00:24:25.693 }, 00:24:25.693 { 00:24:25.693 "subsystem": "vmd", 00:24:25.693 "config": [] 00:24:25.693 }, 00:24:25.693 { 00:24:25.693 "subsystem": "accel", 00:24:25.693 "config": [ 00:24:25.693 { 00:24:25.693 "method": "accel_set_options", 00:24:25.693 "params": { 00:24:25.693 "small_cache_size": 128, 00:24:25.693 "large_cache_size": 16, 00:24:25.693 "task_count": 2048, 00:24:25.693 "sequence_count": 2048, 00:24:25.693 "buf_count": 2048 00:24:25.693 } 00:24:25.693 } 00:24:25.693 ] 00:24:25.693 }, 00:24:25.693 { 00:24:25.693 "subsystem": "bdev", 00:24:25.693 "config": [ 00:24:25.693 { 00:24:25.693 "method": "bdev_set_options", 00:24:25.693 "params": { 00:24:25.693 "bdev_io_pool_size": 65535, 00:24:25.693 "bdev_io_cache_size": 256, 00:24:25.693 
"bdev_auto_examine": true, 00:24:25.693 "iobuf_small_cache_size": 128, 00:24:25.694 "iobuf_large_cache_size": 16 00:24:25.694 } 00:24:25.694 }, 00:24:25.694 { 00:24:25.694 "method": "bdev_raid_set_options", 00:24:25.694 "params": { 00:24:25.694 "process_window_size_kb": 1024, 00:24:25.694 "process_max_bandwidth_mb_sec": 0 00:24:25.694 } 00:24:25.694 }, 00:24:25.694 { 00:24:25.694 "method": "bdev_iscsi_set_options", 00:24:25.694 "params": { 00:24:25.694 "timeout_sec": 30 00:24:25.694 } 00:24:25.694 }, 00:24:25.694 { 00:24:25.694 "method": "bdev_nvme_set_options", 00:24:25.694 "params": { 00:24:25.694 "action_on_timeout": "none", 00:24:25.694 "timeout_us": 0, 00:24:25.694 "timeout_admin_us": 0, 00:24:25.694 "keep_alive_timeout_ms": 10000, 00:24:25.694 "arbitration_burst": 0, 00:24:25.694 "low_priority_weight": 0, 00:24:25.694 "medium_priority_weight": 0, 00:24:25.694 "high_priority_weight": 0, 00:24:25.694 "nvme_adminq_poll_period_us": 10000, 00:24:25.694 "nvme_ioq_poll_period_us": 0, 00:24:25.694 "io_queue_requests": 0, 00:24:25.694 "delay_cmd_submit": true, 00:24:25.694 "transport_retry_count": 4, 00:24:25.694 "bdev_retry_count": 3, 00:24:25.694 "transport_ack_timeout": 0, 00:24:25.694 "ctrlr_loss_timeout_sec": 0, 00:24:25.694 "reconnect_delay_sec": 0, 00:24:25.694 "fast_io_fail_timeout_sec": 0, 00:24:25.694 "disable_auto_failback": false, 00:24:25.694 "generate_uuids": false, 00:24:25.694 "transport_tos": 0, 00:24:25.694 "nvme_error_stat": false, 00:24:25.694 "rdma_srq_size": 0, 00:24:25.694 "io_path_stat": false, 00:24:25.694 "allow_accel_sequence": false, 00:24:25.694 "rdma_max_cq_size": 0, 00:24:25.694 "rdma_cm_event_timeout_ms": 0, 00:24:25.694 "dhchap_digests": [ 00:24:25.694 "sha256", 00:24:25.694 "sha384", 00:24:25.694 "sha512" 00:24:25.694 ], 00:24:25.694 "dhchap_dhgroups": [ 00:24:25.694 "null", 00:24:25.694 "ffdhe2048", 00:24:25.694 "ffdhe3072", 00:24:25.694 "ffdhe4096", 00:24:25.694 "ffdhe6144", 00:24:25.694 "ffdhe8192" 00:24:25.694 ] 00:24:25.694 } 
00:24:25.694 }, 00:24:25.694 { 00:24:25.694 "method": "bdev_nvme_set_hotplug", 00:24:25.694 "params": { 00:24:25.694 "period_us": 100000, 00:24:25.694 "enable": false 00:24:25.694 } 00:24:25.694 }, 00:24:25.694 { 00:24:25.694 "method": "bdev_malloc_create", 00:24:25.694 "params": { 00:24:25.694 "name": "malloc0", 00:24:25.694 "num_blocks": 8192, 00:24:25.694 "block_size": 4096, 00:24:25.694 "physical_block_size": 4096, 00:24:25.694 "uuid": "0e437b2f-b6d8-4bfe-bb2b-0fab5d841424", 00:24:25.694 "optimal_io_boundary": 0, 00:24:25.694 "md_size": 0, 00:24:25.694 "dif_type": 0, 00:24:25.694 "dif_is_head_of_md": false, 00:24:25.694 "dif_pi_format": 0 00:24:25.694 } 00:24:25.694 }, 00:24:25.694 { 00:24:25.694 "method": "bdev_wait_for_examine" 00:24:25.694 } 00:24:25.694 ] 00:24:25.694 }, 00:24:25.694 { 00:24:25.694 "subsystem": "nbd", 00:24:25.694 "config": [] 00:24:25.694 }, 00:24:25.694 { 00:24:25.694 "subsystem": "scheduler", 00:24:25.694 "config": [ 00:24:25.694 { 00:24:25.694 "method": "framework_set_scheduler", 00:24:25.694 "params": { 00:24:25.694 "name": "static" 00:24:25.694 } 00:24:25.694 } 00:24:25.694 ] 00:24:25.694 }, 00:24:25.694 { 00:24:25.694 "subsystem": "nvmf", 00:24:25.694 "config": [ 00:24:25.694 { 00:24:25.694 "method": "nvmf_set_config", 00:24:25.694 "params": { 00:24:25.694 "discovery_filter": "match_any", 00:24:25.694 "admin_cmd_passthru": { 00:24:25.694 "identify_ctrlr": false 00:24:25.694 }, 00:24:25.694 "dhchap_digests": [ 00:24:25.694 "sha256", 00:24:25.694 "sha384", 00:24:25.694 "sha512" 00:24:25.694 ], 00:24:25.694 "dhchap_dhgroups": [ 00:24:25.694 "null", 00:24:25.694 "ffdhe2048", 00:24:25.694 "ffdhe3072", 00:24:25.694 "ffdhe4096", 00:24:25.694 "ffdhe6144", 00:24:25.694 "ffdhe8192" 00:24:25.694 ] 00:24:25.694 } 00:24:25.694 }, 00:24:25.694 { 00:24:25.694 "method": "nvmf_set_max_subsystems", 00:24:25.694 "params": { 00:24:25.694 "max_subsystems": 1024 00:24:25.694 } 00:24:25.694 }, 00:24:25.694 { 00:24:25.694 "method": "nvmf_set_crdt", 
00:24:25.694 "params": { 00:24:25.694 "crdt1": 0, 00:24:25.694 "crdt2": 0, 00:24:25.694 "crdt3": 0 00:24:25.694 } 00:24:25.694 }, 00:24:25.694 { 00:24:25.694 "method": "nvmf_create_transport", 00:24:25.694 "params": { 00:24:25.694 "trtype": "TCP", 00:24:25.694 "max_queue_depth": 128, 00:24:25.694 "max_io_qpairs_per_ctrlr": 127, 00:24:25.694 "in_capsule_data_size": 4096, 00:24:25.694 "max_io_size": 131072, 00:24:25.694 "io_unit_size": 131072, 00:24:25.694 "max_aq_depth": 128, 00:24:25.694 "num_shared_buffers": 511, 00:24:25.694 "buf_cache_size": 4294967295, 00:24:25.694 "dif_insert_or_strip": false, 00:24:25.694 "zcopy": false, 00:24:25.694 "c2h_success": false, 00:24:25.694 "sock_priority": 0, 00:24:25.694 "abort_timeout_sec": 1, 00:24:25.694 "ack_timeout": 0, 00:24:25.694 "data_wr_pool_size": 0 00:24:25.694 } 00:24:25.694 }, 00:24:25.694 { 00:24:25.694 "method": "nvmf_create_subsystem", 00:24:25.694 "params": { 00:24:25.694 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.694 "allow_any_host": false, 00:24:25.694 "serial_number": "SPDK00000000000001", 00:24:25.694 "model_number": "SPDK bdev Controller", 00:24:25.694 "max_namespaces": 10, 00:24:25.694 "min_cntlid": 1, 00:24:25.694 "max_cntlid": 65519, 00:24:25.694 "ana_reporting": false 00:24:25.694 } 00:24:25.694 }, 00:24:25.694 { 00:24:25.694 "method": "nvmf_subsystem_add_host", 00:24:25.694 "params": { 00:24:25.694 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.694 "host": "nqn.2016-06.io.spdk:host1", 00:24:25.694 "psk": "key0" 00:24:25.694 } 00:24:25.694 }, 00:24:25.694 { 00:24:25.694 "method": "nvmf_subsystem_add_ns", 00:24:25.694 "params": { 00:24:25.694 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.694 "namespace": { 00:24:25.694 "nsid": 1, 00:24:25.694 "bdev_name": "malloc0", 00:24:25.694 "nguid": "0E437B2FB6D84BFEBB2B0FAB5D841424", 00:24:25.694 "uuid": "0e437b2f-b6d8-4bfe-bb2b-0fab5d841424", 00:24:25.694 "no_auto_visible": false 00:24:25.694 } 00:24:25.694 } 00:24:25.694 }, 00:24:25.694 { 00:24:25.694 
"method": "nvmf_subsystem_add_listener", 00:24:25.694 "params": { 00:24:25.694 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.694 "listen_address": { 00:24:25.694 "trtype": "TCP", 00:24:25.694 "adrfam": "IPv4", 00:24:25.694 "traddr": "10.0.0.2", 00:24:25.694 "trsvcid": "4420" 00:24:25.694 }, 00:24:25.694 "secure_channel": true 00:24:25.694 } 00:24:25.694 } 00:24:25.694 ] 00:24:25.694 } 00:24:25.694 ] 00:24:25.694 }' 00:24:25.694 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:25.694 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:25.694 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=278687 00:24:25.694 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:25.694 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 278687 00:24:25.694 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 278687 ']' 00:24:25.694 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.694 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:25.695 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:25.695 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:25.695 20:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:25.695 [2024-11-18 20:25:37.616717] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:24:25.695 [2024-11-18 20:25:37.616793] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:25.695 [2024-11-18 20:25:37.687962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.955 [2024-11-18 20:25:37.735017] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:25.955 [2024-11-18 20:25:37.735069] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:25.955 [2024-11-18 20:25:37.735083] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:25.955 [2024-11-18 20:25:37.735094] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:25.955 [2024-11-18 20:25:37.735104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:25.955 [2024-11-18 20:25:37.735721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.216 [2024-11-18 20:25:37.976343] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:26.216 [2024-11-18 20:25:38.008373] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:26.216 [2024-11-18 20:25:38.008647] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:26.786 20:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:26.786 20:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:26.786 20:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:26.786 20:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:26.786 20:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.786 20:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:26.786 20:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=278835 00:24:26.786 20:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 278835 /var/tmp/bdevperf.sock 00:24:26.786 20:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 278835 ']' 00:24:26.786 20:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:26.786 20:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:26.786 20:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:24:26.786 20:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:26.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:26.786 20:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:26.786 "subsystems": [ 00:24:26.786 { 00:24:26.786 "subsystem": "keyring", 00:24:26.786 "config": [ 00:24:26.786 { 00:24:26.786 "method": "keyring_file_add_key", 00:24:26.786 "params": { 00:24:26.786 "name": "key0", 00:24:26.786 "path": "/tmp/tmp.pEwW7BeB6i" 00:24:26.786 } 00:24:26.786 } 00:24:26.786 ] 00:24:26.786 }, 00:24:26.786 { 00:24:26.786 "subsystem": "iobuf", 00:24:26.786 "config": [ 00:24:26.786 { 00:24:26.786 "method": "iobuf_set_options", 00:24:26.786 "params": { 00:24:26.786 "small_pool_count": 8192, 00:24:26.786 "large_pool_count": 1024, 00:24:26.786 "small_bufsize": 8192, 00:24:26.786 "large_bufsize": 135168, 00:24:26.786 "enable_numa": false 00:24:26.786 } 00:24:26.786 } 00:24:26.786 ] 00:24:26.786 }, 00:24:26.786 { 00:24:26.786 "subsystem": "sock", 00:24:26.786 "config": [ 00:24:26.786 { 00:24:26.786 "method": "sock_set_default_impl", 00:24:26.786 "params": { 00:24:26.786 "impl_name": "posix" 00:24:26.786 } 00:24:26.786 }, 00:24:26.786 { 00:24:26.786 "method": "sock_impl_set_options", 00:24:26.786 "params": { 00:24:26.786 "impl_name": "ssl", 00:24:26.786 "recv_buf_size": 4096, 00:24:26.786 "send_buf_size": 4096, 00:24:26.786 "enable_recv_pipe": true, 00:24:26.786 "enable_quickack": false, 00:24:26.786 "enable_placement_id": 0, 00:24:26.786 "enable_zerocopy_send_server": true, 00:24:26.786 "enable_zerocopy_send_client": false, 00:24:26.786 "zerocopy_threshold": 0, 00:24:26.786 "tls_version": 0, 00:24:26.786 "enable_ktls": false 00:24:26.786 } 00:24:26.786 }, 00:24:26.786 { 00:24:26.786 "method": "sock_impl_set_options", 00:24:26.786 "params": { 
00:24:26.786 "impl_name": "posix", 00:24:26.786 "recv_buf_size": 2097152, 00:24:26.786 "send_buf_size": 2097152, 00:24:26.786 "enable_recv_pipe": true, 00:24:26.786 "enable_quickack": false, 00:24:26.786 "enable_placement_id": 0, 00:24:26.786 "enable_zerocopy_send_server": true, 00:24:26.786 "enable_zerocopy_send_client": false, 00:24:26.786 "zerocopy_threshold": 0, 00:24:26.786 "tls_version": 0, 00:24:26.786 "enable_ktls": false 00:24:26.786 } 00:24:26.786 } 00:24:26.786 ] 00:24:26.786 }, 00:24:26.786 { 00:24:26.786 "subsystem": "vmd", 00:24:26.786 "config": [] 00:24:26.786 }, 00:24:26.786 { 00:24:26.786 "subsystem": "accel", 00:24:26.786 "config": [ 00:24:26.786 { 00:24:26.786 "method": "accel_set_options", 00:24:26.786 "params": { 00:24:26.786 "small_cache_size": 128, 00:24:26.786 "large_cache_size": 16, 00:24:26.786 "task_count": 2048, 00:24:26.786 "sequence_count": 2048, 00:24:26.786 "buf_count": 2048 00:24:26.786 } 00:24:26.786 } 00:24:26.786 ] 00:24:26.786 }, 00:24:26.786 { 00:24:26.786 "subsystem": "bdev", 00:24:26.786 "config": [ 00:24:26.786 { 00:24:26.786 "method": "bdev_set_options", 00:24:26.786 "params": { 00:24:26.786 "bdev_io_pool_size": 65535, 00:24:26.786 "bdev_io_cache_size": 256, 00:24:26.786 "bdev_auto_examine": true, 00:24:26.786 "iobuf_small_cache_size": 128, 00:24:26.786 "iobuf_large_cache_size": 16 00:24:26.786 } 00:24:26.786 }, 00:24:26.786 { 00:24:26.786 "method": "bdev_raid_set_options", 00:24:26.786 "params": { 00:24:26.786 "process_window_size_kb": 1024, 00:24:26.786 "process_max_bandwidth_mb_sec": 0 00:24:26.786 } 00:24:26.786 }, 00:24:26.786 { 00:24:26.786 "method": "bdev_iscsi_set_options", 00:24:26.786 "params": { 00:24:26.786 "timeout_sec": 30 00:24:26.786 } 00:24:26.786 }, 00:24:26.786 { 00:24:26.786 "method": "bdev_nvme_set_options", 00:24:26.786 "params": { 00:24:26.786 "action_on_timeout": "none", 00:24:26.786 "timeout_us": 0, 00:24:26.786 "timeout_admin_us": 0, 00:24:26.786 "keep_alive_timeout_ms": 10000, 00:24:26.786 
"arbitration_burst": 0, 00:24:26.786 "low_priority_weight": 0, 00:24:26.786 "medium_priority_weight": 0, 00:24:26.786 "high_priority_weight": 0, 00:24:26.786 "nvme_adminq_poll_period_us": 10000, 00:24:26.786 "nvme_ioq_poll_period_us": 0, 00:24:26.786 "io_queue_requests": 512, 00:24:26.786 "delay_cmd_submit": true, 00:24:26.786 "transport_retry_count": 4, 00:24:26.786 "bdev_retry_count": 3, 00:24:26.786 "transport_ack_timeout": 0, 00:24:26.786 "ctrlr_loss_timeout_sec": 0, 00:24:26.786 "reconnect_delay_sec": 0, 00:24:26.786 "fast_io_fail_timeout_sec": 0, 00:24:26.786 "disable_auto_failback": false, 00:24:26.786 "generate_uuids": false, 00:24:26.786 "transport_tos": 0, 00:24:26.786 "nvme_error_stat": false, 00:24:26.786 "rdma_srq_size": 0, 00:24:26.786 "io_path_stat": false, 00:24:26.786 "allow_accel_sequence": false, 00:24:26.786 "rdma_max_cq_size": 0, 00:24:26.786 "rdma_cm_event_timeout_ms": 0, 00:24:26.786 "dhchap_digests": [ 00:24:26.786 "sha256", 00:24:26.786 "sha384", 00:24:26.786 "sha512" 00:24:26.786 ], 00:24:26.786 "dhchap_dhgroups": [ 00:24:26.786 "null", 00:24:26.786 "ffdhe2048", 00:24:26.786 "ffdhe3072", 00:24:26.786 "ffdhe4096", 00:24:26.786 "ffdhe6144", 00:24:26.786 "ffdhe8192" 00:24:26.786 ] 00:24:26.786 } 00:24:26.786 }, 00:24:26.786 { 00:24:26.786 "method": "bdev_nvme_attach_controller", 00:24:26.786 "params": { 00:24:26.786 "name": "TLSTEST", 00:24:26.786 "trtype": "TCP", 00:24:26.786 "adrfam": "IPv4", 00:24:26.786 "traddr": "10.0.0.2", 00:24:26.786 "trsvcid": "4420", 00:24:26.786 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:26.786 "prchk_reftag": false, 00:24:26.787 "prchk_guard": false, 00:24:26.787 "ctrlr_loss_timeout_sec": 0, 00:24:26.787 "reconnect_delay_sec": 0, 00:24:26.787 "fast_io_fail_timeout_sec": 0, 00:24:26.787 "psk": "key0", 00:24:26.787 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:26.787 "hdgst": false, 00:24:26.787 "ddgst": false, 00:24:26.787 "multipath": "multipath" 00:24:26.787 } 00:24:26.787 }, 00:24:26.787 { 00:24:26.787 
"method": "bdev_nvme_set_hotplug", 00:24:26.787 "params": { 00:24:26.787 "period_us": 100000, 00:24:26.787 "enable": false 00:24:26.787 } 00:24:26.787 }, 00:24:26.787 { 00:24:26.787 "method": "bdev_wait_for_examine" 00:24:26.787 } 00:24:26.787 ] 00:24:26.787 }, 00:24:26.787 { 00:24:26.787 "subsystem": "nbd", 00:24:26.787 "config": [] 00:24:26.787 } 00:24:26.787 ] 00:24:26.787 }' 00:24:26.787 20:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:26.787 20:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.787 [2024-11-18 20:25:38.685521] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:24:26.787 [2024-11-18 20:25:38.685614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid278835 ] 00:24:26.787 [2024-11-18 20:25:38.752132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.054 [2024-11-18 20:25:38.799544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:27.054 [2024-11-18 20:25:38.974311] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:27.362 20:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:27.362 20:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:27.362 20:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:27.362 Running I/O for 10 seconds... 
00:24:29.333 3161.00 IOPS, 12.35 MiB/s [2024-11-18T19:25:42.286Z] 3201.00 IOPS, 12.50 MiB/s [2024-11-18T19:25:43.665Z] 3239.00 IOPS, 12.65 MiB/s [2024-11-18T19:25:44.236Z] 3231.75 IOPS, 12.62 MiB/s [2024-11-18T19:25:45.618Z] 3244.00 IOPS, 12.67 MiB/s [2024-11-18T19:25:46.555Z] 3259.17 IOPS, 12.73 MiB/s [2024-11-18T19:25:47.497Z] 3262.43 IOPS, 12.74 MiB/s [2024-11-18T19:25:48.434Z] 3264.50 IOPS, 12.75 MiB/s [2024-11-18T19:25:49.375Z] 3267.67 IOPS, 12.76 MiB/s [2024-11-18T19:25:49.375Z] 3274.80 IOPS, 12.79 MiB/s 00:24:37.367 Latency(us) 00:24:37.367 [2024-11-18T19:25:49.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.367 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:37.367 Verification LBA range: start 0x0 length 0x2000 00:24:37.367 TLSTESTn1 : 10.02 3281.38 12.82 0.00 0.00 38943.11 7039.05 41554.68 00:24:37.367 [2024-11-18T19:25:49.375Z] =================================================================================================================== 00:24:37.367 [2024-11-18T19:25:49.375Z] Total : 3281.38 12.82 0.00 0.00 38943.11 7039.05 41554.68 00:24:37.367 { 00:24:37.367 "results": [ 00:24:37.367 { 00:24:37.367 "job": "TLSTESTn1", 00:24:37.367 "core_mask": "0x4", 00:24:37.367 "workload": "verify", 00:24:37.367 "status": "finished", 00:24:37.367 "verify_range": { 00:24:37.367 "start": 0, 00:24:37.367 "length": 8192 00:24:37.367 }, 00:24:37.367 "queue_depth": 128, 00:24:37.367 "io_size": 4096, 00:24:37.367 "runtime": 10.018336, 00:24:37.367 "iops": 3281.3832556624175, 00:24:37.367 "mibps": 12.817903342431318, 00:24:37.367 "io_failed": 0, 00:24:37.367 "io_timeout": 0, 00:24:37.367 "avg_latency_us": 38943.11398015769, 00:24:37.367 "min_latency_us": 7039.051851851852, 00:24:37.367 "max_latency_us": 41554.67851851852 00:24:37.367 } 00:24:37.367 ], 00:24:37.367 "core_count": 1 00:24:37.367 } 00:24:37.367 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:24:37.367 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 278835 00:24:37.367 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 278835 ']' 00:24:37.367 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 278835 00:24:37.367 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:37.367 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:37.367 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 278835 00:24:37.367 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:37.367 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:37.367 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 278835' 00:24:37.367 killing process with pid 278835 00:24:37.367 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 278835 00:24:37.367 Received shutdown signal, test time was about 10.000000 seconds 00:24:37.367 00:24:37.367 Latency(us) 00:24:37.367 [2024-11-18T19:25:49.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.367 [2024-11-18T19:25:49.375Z] =================================================================================================================== 00:24:37.367 [2024-11-18T19:25:49.375Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:37.367 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 278835 00:24:37.626 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 278687 00:24:37.626 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 
-- # '[' -z 278687 ']' 00:24:37.626 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 278687 00:24:37.626 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:37.626 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:37.626 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 278687 00:24:37.626 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:37.626 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:37.626 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 278687' 00:24:37.626 killing process with pid 278687 00:24:37.626 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 278687 00:24:37.626 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 278687 00:24:37.884 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:37.884 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:37.884 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:37.885 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:37.885 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=280169 00:24:37.885 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:37.885 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 280169 00:24:37.885 20:25:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 280169 ']' 00:24:37.885 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.885 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:37.885 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.885 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:37.885 20:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:37.885 [2024-11-18 20:25:49.815895] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:24:37.885 [2024-11-18 20:25:49.816008] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.885 [2024-11-18 20:25:49.886627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.143 [2024-11-18 20:25:49.930806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.143 [2024-11-18 20:25:49.930853] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.143 [2024-11-18 20:25:49.930876] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.143 [2024-11-18 20:25:49.930888] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:38.143 [2024-11-18 20:25:49.930898] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:38.143 [2024-11-18 20:25:49.931459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.143 20:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:38.143 20:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:38.143 20:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:38.143 20:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:38.143 20:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:38.143 20:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.143 20:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.pEwW7BeB6i 00:24:38.143 20:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pEwW7BeB6i 00:24:38.143 20:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:38.401 [2024-11-18 20:25:50.324015] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.401 20:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:38.661 20:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:38.922 [2024-11-18 20:25:50.877505] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:24:38.922 [2024-11-18 20:25:50.877786] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.922 20:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:39.182 malloc0 00:24:39.182 20:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:39.750 20:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pEwW7BeB6i 00:24:40.009 20:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:40.267 20:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=280459 00:24:40.267 20:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:40.267 20:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 280459 /var/tmp/bdevperf.sock 00:24:40.267 20:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:40.267 20:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 280459 ']' 00:24:40.267 20:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:40.267 20:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:40.267 20:25:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:40.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:40.267 20:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:40.267 20:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:40.267 [2024-11-18 20:25:52.129982] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:24:40.267 [2024-11-18 20:25:52.130073] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid280459 ] 00:24:40.267 [2024-11-18 20:25:52.196905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.267 [2024-11-18 20:25:52.243299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:40.525 20:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:40.525 20:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:40.525 20:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pEwW7BeB6i 00:24:40.782 20:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:41.040 [2024-11-18 20:25:52.872366] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered 
experimental 00:24:41.040 nvme0n1 00:24:41.040 20:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:41.299 Running I/O for 1 seconds... 00:24:42.261 3383.00 IOPS, 13.21 MiB/s 00:24:42.261 Latency(us) 00:24:42.261 [2024-11-18T19:25:54.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.261 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:42.261 Verification LBA range: start 0x0 length 0x2000 00:24:42.261 nvme0n1 : 1.04 3374.85 13.18 0.00 0.00 37285.94 11796.48 38641.97 00:24:42.261 [2024-11-18T19:25:54.269Z] =================================================================================================================== 00:24:42.261 [2024-11-18T19:25:54.269Z] Total : 3374.85 13.18 0.00 0.00 37285.94 11796.48 38641.97 00:24:42.261 { 00:24:42.261 "results": [ 00:24:42.261 { 00:24:42.261 "job": "nvme0n1", 00:24:42.261 "core_mask": "0x2", 00:24:42.261 "workload": "verify", 00:24:42.261 "status": "finished", 00:24:42.261 "verify_range": { 00:24:42.261 "start": 0, 00:24:42.261 "length": 8192 00:24:42.261 }, 00:24:42.261 "queue_depth": 128, 00:24:42.261 "io_size": 4096, 00:24:42.261 "runtime": 1.040343, 00:24:42.261 "iops": 3374.848487469998, 00:24:42.261 "mibps": 13.18300190417968, 00:24:42.261 "io_failed": 0, 00:24:42.261 "io_timeout": 0, 00:24:42.261 "avg_latency_us": 37285.93764612804, 00:24:42.261 "min_latency_us": 11796.48, 00:24:42.262 "max_latency_us": 38641.96740740741 00:24:42.262 } 00:24:42.262 ], 00:24:42.262 "core_count": 1 00:24:42.262 } 00:24:42.262 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 280459 00:24:42.262 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 280459 ']' 00:24:42.262 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 280459 00:24:42.262 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:42.262 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:42.262 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 280459 00:24:42.262 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:42.262 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:42.262 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280459' 00:24:42.262 killing process with pid 280459 00:24:42.262 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 280459 00:24:42.262 Received shutdown signal, test time was about 1.000000 seconds 00:24:42.262 00:24:42.262 Latency(us) 00:24:42.262 [2024-11-18T19:25:54.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.262 [2024-11-18T19:25:54.270Z] =================================================================================================================== 00:24:42.262 [2024-11-18T19:25:54.270Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:42.262 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 280459 00:24:42.522 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 280169 00:24:42.522 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 280169 ']' 00:24:42.522 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 280169 00:24:42.522 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:42.522 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux 
= Linux ']' 00:24:42.522 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 280169 00:24:42.522 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:42.522 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:42.522 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280169' 00:24:42.522 killing process with pid 280169 00:24:42.522 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 280169 00:24:42.522 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 280169 00:24:42.781 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:42.781 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:42.781 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:42.781 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.781 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=280739 00:24:42.781 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:42.781 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 280739 00:24:42.781 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 280739 ']' 00:24:42.781 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.781 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:42.781 20:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.781 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:42.781 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.781 [2024-11-18 20:25:54.666788] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:24:42.781 [2024-11-18 20:25:54.666903] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.781 [2024-11-18 20:25:54.737928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.781 [2024-11-18 20:25:54.779650] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:42.781 [2024-11-18 20:25:54.779708] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.781 [2024-11-18 20:25:54.779732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.781 [2024-11-18 20:25:54.779743] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:42.781 [2024-11-18 20:25:54.779753] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:42.781 [2024-11-18 20:25:54.780316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.040 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:43.040 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:43.040 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:43.040 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:43.040 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.040 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:43.040 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:43.040 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.040 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.040 [2024-11-18 20:25:54.922170] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.040 malloc0 00:24:43.040 [2024-11-18 20:25:54.953173] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:43.040 [2024-11-18 20:25:54.953427] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:43.040 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.040 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=280876 00:24:43.040 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:43.040 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 280876 /var/tmp/bdevperf.sock 00:24:43.040 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 280876 ']' 00:24:43.040 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:43.040 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:43.040 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:43.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:43.040 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:43.040 20:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.040 [2024-11-18 20:25:55.023217] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:24:43.040 [2024-11-18 20:25:55.023276] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid280876 ] 00:24:43.298 [2024-11-18 20:25:55.088764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.298 [2024-11-18 20:25:55.133507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:43.298 20:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:43.298 20:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:43.298 20:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pEwW7BeB6i 00:24:43.557 20:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:43.815 [2024-11-18 20:25:55.785838] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:44.072 nvme0n1 00:24:44.072 20:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:44.072 Running I/O for 1 seconds... 
00:24:45.012 3063.00 IOPS, 11.96 MiB/s 00:24:45.012 Latency(us) 00:24:45.012 [2024-11-18T19:25:57.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.012 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:45.012 Verification LBA range: start 0x0 length 0x2000 00:24:45.012 nvme0n1 : 1.03 3110.64 12.15 0.00 0.00 40628.97 6262.33 48739.37 00:24:45.012 [2024-11-18T19:25:57.020Z] =================================================================================================================== 00:24:45.012 [2024-11-18T19:25:57.020Z] Total : 3110.64 12.15 0.00 0.00 40628.97 6262.33 48739.37 00:24:45.012 { 00:24:45.012 "results": [ 00:24:45.012 { 00:24:45.012 "job": "nvme0n1", 00:24:45.012 "core_mask": "0x2", 00:24:45.012 "workload": "verify", 00:24:45.012 "status": "finished", 00:24:45.012 "verify_range": { 00:24:45.012 "start": 0, 00:24:45.012 "length": 8192 00:24:45.012 }, 00:24:45.012 "queue_depth": 128, 00:24:45.012 "io_size": 4096, 00:24:45.012 "runtime": 1.025835, 00:24:45.012 "iops": 3110.636700833955, 00:24:45.012 "mibps": 12.150924612632636, 00:24:45.012 "io_failed": 0, 00:24:45.012 "io_timeout": 0, 00:24:45.012 "avg_latency_us": 40628.9654987987, 00:24:45.012 "min_latency_us": 6262.328888888889, 00:24:45.012 "max_latency_us": 48739.36592592593 00:24:45.012 } 00:24:45.012 ], 00:24:45.012 "core_count": 1 00:24:45.012 } 00:24:45.012 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:45.271 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.271 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:45.271 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.271 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:45.271 "subsystems": [ 00:24:45.271 { 00:24:45.271 "subsystem": 
"keyring", 00:24:45.271 "config": [ 00:24:45.271 { 00:24:45.271 "method": "keyring_file_add_key", 00:24:45.271 "params": { 00:24:45.271 "name": "key0", 00:24:45.271 "path": "/tmp/tmp.pEwW7BeB6i" 00:24:45.271 } 00:24:45.271 } 00:24:45.271 ] 00:24:45.271 }, 00:24:45.271 { 00:24:45.271 "subsystem": "iobuf", 00:24:45.271 "config": [ 00:24:45.271 { 00:24:45.271 "method": "iobuf_set_options", 00:24:45.271 "params": { 00:24:45.271 "small_pool_count": 8192, 00:24:45.271 "large_pool_count": 1024, 00:24:45.271 "small_bufsize": 8192, 00:24:45.271 "large_bufsize": 135168, 00:24:45.271 "enable_numa": false 00:24:45.271 } 00:24:45.271 } 00:24:45.271 ] 00:24:45.271 }, 00:24:45.271 { 00:24:45.271 "subsystem": "sock", 00:24:45.271 "config": [ 00:24:45.271 { 00:24:45.271 "method": "sock_set_default_impl", 00:24:45.271 "params": { 00:24:45.271 "impl_name": "posix" 00:24:45.271 } 00:24:45.271 }, 00:24:45.271 { 00:24:45.271 "method": "sock_impl_set_options", 00:24:45.271 "params": { 00:24:45.271 "impl_name": "ssl", 00:24:45.271 "recv_buf_size": 4096, 00:24:45.271 "send_buf_size": 4096, 00:24:45.271 "enable_recv_pipe": true, 00:24:45.271 "enable_quickack": false, 00:24:45.271 "enable_placement_id": 0, 00:24:45.271 "enable_zerocopy_send_server": true, 00:24:45.271 "enable_zerocopy_send_client": false, 00:24:45.271 "zerocopy_threshold": 0, 00:24:45.271 "tls_version": 0, 00:24:45.271 "enable_ktls": false 00:24:45.271 } 00:24:45.271 }, 00:24:45.271 { 00:24:45.271 "method": "sock_impl_set_options", 00:24:45.271 "params": { 00:24:45.271 "impl_name": "posix", 00:24:45.271 "recv_buf_size": 2097152, 00:24:45.271 "send_buf_size": 2097152, 00:24:45.271 "enable_recv_pipe": true, 00:24:45.271 "enable_quickack": false, 00:24:45.271 "enable_placement_id": 0, 00:24:45.271 "enable_zerocopy_send_server": true, 00:24:45.271 "enable_zerocopy_send_client": false, 00:24:45.271 "zerocopy_threshold": 0, 00:24:45.271 "tls_version": 0, 00:24:45.271 "enable_ktls": false 00:24:45.271 } 00:24:45.271 } 00:24:45.271 
] 00:24:45.271 }, 00:24:45.271 { 00:24:45.271 "subsystem": "vmd", 00:24:45.271 "config": [] 00:24:45.271 }, 00:24:45.271 { 00:24:45.271 "subsystem": "accel", 00:24:45.271 "config": [ 00:24:45.271 { 00:24:45.271 "method": "accel_set_options", 00:24:45.271 "params": { 00:24:45.271 "small_cache_size": 128, 00:24:45.271 "large_cache_size": 16, 00:24:45.271 "task_count": 2048, 00:24:45.271 "sequence_count": 2048, 00:24:45.271 "buf_count": 2048 00:24:45.271 } 00:24:45.271 } 00:24:45.271 ] 00:24:45.271 }, 00:24:45.271 { 00:24:45.271 "subsystem": "bdev", 00:24:45.271 "config": [ 00:24:45.271 { 00:24:45.271 "method": "bdev_set_options", 00:24:45.271 "params": { 00:24:45.271 "bdev_io_pool_size": 65535, 00:24:45.271 "bdev_io_cache_size": 256, 00:24:45.271 "bdev_auto_examine": true, 00:24:45.271 "iobuf_small_cache_size": 128, 00:24:45.271 "iobuf_large_cache_size": 16 00:24:45.271 } 00:24:45.271 }, 00:24:45.271 { 00:24:45.271 "method": "bdev_raid_set_options", 00:24:45.271 "params": { 00:24:45.271 "process_window_size_kb": 1024, 00:24:45.271 "process_max_bandwidth_mb_sec": 0 00:24:45.271 } 00:24:45.271 }, 00:24:45.271 { 00:24:45.271 "method": "bdev_iscsi_set_options", 00:24:45.271 "params": { 00:24:45.271 "timeout_sec": 30 00:24:45.271 } 00:24:45.271 }, 00:24:45.271 { 00:24:45.271 "method": "bdev_nvme_set_options", 00:24:45.271 "params": { 00:24:45.271 "action_on_timeout": "none", 00:24:45.271 "timeout_us": 0, 00:24:45.271 "timeout_admin_us": 0, 00:24:45.271 "keep_alive_timeout_ms": 10000, 00:24:45.271 "arbitration_burst": 0, 00:24:45.271 "low_priority_weight": 0, 00:24:45.271 "medium_priority_weight": 0, 00:24:45.271 "high_priority_weight": 0, 00:24:45.271 "nvme_adminq_poll_period_us": 10000, 00:24:45.271 "nvme_ioq_poll_period_us": 0, 00:24:45.271 "io_queue_requests": 0, 00:24:45.271 "delay_cmd_submit": true, 00:24:45.271 "transport_retry_count": 4, 00:24:45.271 "bdev_retry_count": 3, 00:24:45.271 "transport_ack_timeout": 0, 00:24:45.271 "ctrlr_loss_timeout_sec": 0, 
00:24:45.271 "reconnect_delay_sec": 0, 00:24:45.271 "fast_io_fail_timeout_sec": 0, 00:24:45.271 "disable_auto_failback": false, 00:24:45.271 "generate_uuids": false, 00:24:45.271 "transport_tos": 0, 00:24:45.271 "nvme_error_stat": false, 00:24:45.271 "rdma_srq_size": 0, 00:24:45.271 "io_path_stat": false, 00:24:45.271 "allow_accel_sequence": false, 00:24:45.271 "rdma_max_cq_size": 0, 00:24:45.271 "rdma_cm_event_timeout_ms": 0, 00:24:45.271 "dhchap_digests": [ 00:24:45.271 "sha256", 00:24:45.271 "sha384", 00:24:45.271 "sha512" 00:24:45.271 ], 00:24:45.271 "dhchap_dhgroups": [ 00:24:45.271 "null", 00:24:45.271 "ffdhe2048", 00:24:45.271 "ffdhe3072", 00:24:45.271 "ffdhe4096", 00:24:45.271 "ffdhe6144", 00:24:45.271 "ffdhe8192" 00:24:45.271 ] 00:24:45.271 } 00:24:45.271 }, 00:24:45.271 { 00:24:45.271 "method": "bdev_nvme_set_hotplug", 00:24:45.271 "params": { 00:24:45.271 "period_us": 100000, 00:24:45.271 "enable": false 00:24:45.271 } 00:24:45.271 }, 00:24:45.271 { 00:24:45.271 "method": "bdev_malloc_create", 00:24:45.271 "params": { 00:24:45.271 "name": "malloc0", 00:24:45.271 "num_blocks": 8192, 00:24:45.271 "block_size": 4096, 00:24:45.271 "physical_block_size": 4096, 00:24:45.271 "uuid": "d54ce3d3-6894-4d5c-aa69-29684e9ba098", 00:24:45.271 "optimal_io_boundary": 0, 00:24:45.271 "md_size": 0, 00:24:45.271 "dif_type": 0, 00:24:45.271 "dif_is_head_of_md": false, 00:24:45.271 "dif_pi_format": 0 00:24:45.271 } 00:24:45.271 }, 00:24:45.271 { 00:24:45.271 "method": "bdev_wait_for_examine" 00:24:45.271 } 00:24:45.271 ] 00:24:45.271 }, 00:24:45.271 { 00:24:45.271 "subsystem": "nbd", 00:24:45.271 "config": [] 00:24:45.271 }, 00:24:45.271 { 00:24:45.271 "subsystem": "scheduler", 00:24:45.271 "config": [ 00:24:45.271 { 00:24:45.271 "method": "framework_set_scheduler", 00:24:45.271 "params": { 00:24:45.271 "name": "static" 00:24:45.271 } 00:24:45.271 } 00:24:45.271 ] 00:24:45.271 }, 00:24:45.271 { 00:24:45.271 "subsystem": "nvmf", 00:24:45.271 "config": [ 00:24:45.271 { 
00:24:45.271 "method": "nvmf_set_config", 00:24:45.271 "params": { 00:24:45.271 "discovery_filter": "match_any", 00:24:45.271 "admin_cmd_passthru": { 00:24:45.271 "identify_ctrlr": false 00:24:45.271 }, 00:24:45.271 "dhchap_digests": [ 00:24:45.271 "sha256", 00:24:45.271 "sha384", 00:24:45.271 "sha512" 00:24:45.271 ], 00:24:45.271 "dhchap_dhgroups": [ 00:24:45.271 "null", 00:24:45.271 "ffdhe2048", 00:24:45.271 "ffdhe3072", 00:24:45.271 "ffdhe4096", 00:24:45.271 "ffdhe6144", 00:24:45.271 "ffdhe8192" 00:24:45.271 ] 00:24:45.271 } 00:24:45.271 }, 00:24:45.271 { 00:24:45.271 "method": "nvmf_set_max_subsystems", 00:24:45.271 "params": { 00:24:45.272 "max_subsystems": 1024 00:24:45.272 } 00:24:45.272 }, 00:24:45.272 { 00:24:45.272 "method": "nvmf_set_crdt", 00:24:45.272 "params": { 00:24:45.272 "crdt1": 0, 00:24:45.272 "crdt2": 0, 00:24:45.272 "crdt3": 0 00:24:45.272 } 00:24:45.272 }, 00:24:45.272 { 00:24:45.272 "method": "nvmf_create_transport", 00:24:45.272 "params": { 00:24:45.272 "trtype": "TCP", 00:24:45.272 "max_queue_depth": 128, 00:24:45.272 "max_io_qpairs_per_ctrlr": 127, 00:24:45.272 "in_capsule_data_size": 4096, 00:24:45.272 "max_io_size": 131072, 00:24:45.272 "io_unit_size": 131072, 00:24:45.272 "max_aq_depth": 128, 00:24:45.272 "num_shared_buffers": 511, 00:24:45.272 "buf_cache_size": 4294967295, 00:24:45.272 "dif_insert_or_strip": false, 00:24:45.272 "zcopy": false, 00:24:45.272 "c2h_success": false, 00:24:45.272 "sock_priority": 0, 00:24:45.272 "abort_timeout_sec": 1, 00:24:45.272 "ack_timeout": 0, 00:24:45.272 "data_wr_pool_size": 0 00:24:45.272 } 00:24:45.272 }, 00:24:45.272 { 00:24:45.272 "method": "nvmf_create_subsystem", 00:24:45.272 "params": { 00:24:45.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.272 "allow_any_host": false, 00:24:45.272 "serial_number": "00000000000000000000", 00:24:45.272 "model_number": "SPDK bdev Controller", 00:24:45.272 "max_namespaces": 32, 00:24:45.272 "min_cntlid": 1, 00:24:45.272 "max_cntlid": 65519, 00:24:45.272 
"ana_reporting": false 00:24:45.272 } 00:24:45.272 }, 00:24:45.272 { 00:24:45.272 "method": "nvmf_subsystem_add_host", 00:24:45.272 "params": { 00:24:45.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.272 "host": "nqn.2016-06.io.spdk:host1", 00:24:45.272 "psk": "key0" 00:24:45.272 } 00:24:45.272 }, 00:24:45.272 { 00:24:45.272 "method": "nvmf_subsystem_add_ns", 00:24:45.272 "params": { 00:24:45.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.272 "namespace": { 00:24:45.272 "nsid": 1, 00:24:45.272 "bdev_name": "malloc0", 00:24:45.272 "nguid": "D54CE3D368944D5CAA6929684E9BA098", 00:24:45.272 "uuid": "d54ce3d3-6894-4d5c-aa69-29684e9ba098", 00:24:45.272 "no_auto_visible": false 00:24:45.272 } 00:24:45.272 } 00:24:45.272 }, 00:24:45.272 { 00:24:45.272 "method": "nvmf_subsystem_add_listener", 00:24:45.272 "params": { 00:24:45.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.272 "listen_address": { 00:24:45.272 "trtype": "TCP", 00:24:45.272 "adrfam": "IPv4", 00:24:45.272 "traddr": "10.0.0.2", 00:24:45.272 "trsvcid": "4420" 00:24:45.272 }, 00:24:45.272 "secure_channel": false, 00:24:45.272 "sock_impl": "ssl" 00:24:45.272 } 00:24:45.272 } 00:24:45.272 ] 00:24:45.272 } 00:24:45.272 ] 00:24:45.272 }' 00:24:45.272 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:45.530 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:45.530 "subsystems": [ 00:24:45.531 { 00:24:45.531 "subsystem": "keyring", 00:24:45.531 "config": [ 00:24:45.531 { 00:24:45.531 "method": "keyring_file_add_key", 00:24:45.531 "params": { 00:24:45.531 "name": "key0", 00:24:45.531 "path": "/tmp/tmp.pEwW7BeB6i" 00:24:45.531 } 00:24:45.531 } 00:24:45.531 ] 00:24:45.531 }, 00:24:45.531 { 00:24:45.531 "subsystem": "iobuf", 00:24:45.531 "config": [ 00:24:45.531 { 00:24:45.531 "method": "iobuf_set_options", 00:24:45.531 "params": { 00:24:45.531 
"small_pool_count": 8192, 00:24:45.531 "large_pool_count": 1024, 00:24:45.531 "small_bufsize": 8192, 00:24:45.531 "large_bufsize": 135168, 00:24:45.531 "enable_numa": false 00:24:45.531 } 00:24:45.531 } 00:24:45.531 ] 00:24:45.531 }, 00:24:45.531 { 00:24:45.531 "subsystem": "sock", 00:24:45.531 "config": [ 00:24:45.531 { 00:24:45.531 "method": "sock_set_default_impl", 00:24:45.531 "params": { 00:24:45.531 "impl_name": "posix" 00:24:45.531 } 00:24:45.531 }, 00:24:45.531 { 00:24:45.531 "method": "sock_impl_set_options", 00:24:45.531 "params": { 00:24:45.531 "impl_name": "ssl", 00:24:45.531 "recv_buf_size": 4096, 00:24:45.531 "send_buf_size": 4096, 00:24:45.531 "enable_recv_pipe": true, 00:24:45.531 "enable_quickack": false, 00:24:45.531 "enable_placement_id": 0, 00:24:45.531 "enable_zerocopy_send_server": true, 00:24:45.531 "enable_zerocopy_send_client": false, 00:24:45.531 "zerocopy_threshold": 0, 00:24:45.531 "tls_version": 0, 00:24:45.531 "enable_ktls": false 00:24:45.531 } 00:24:45.531 }, 00:24:45.531 { 00:24:45.531 "method": "sock_impl_set_options", 00:24:45.531 "params": { 00:24:45.531 "impl_name": "posix", 00:24:45.531 "recv_buf_size": 2097152, 00:24:45.531 "send_buf_size": 2097152, 00:24:45.531 "enable_recv_pipe": true, 00:24:45.531 "enable_quickack": false, 00:24:45.531 "enable_placement_id": 0, 00:24:45.531 "enable_zerocopy_send_server": true, 00:24:45.531 "enable_zerocopy_send_client": false, 00:24:45.531 "zerocopy_threshold": 0, 00:24:45.531 "tls_version": 0, 00:24:45.531 "enable_ktls": false 00:24:45.531 } 00:24:45.531 } 00:24:45.531 ] 00:24:45.531 }, 00:24:45.531 { 00:24:45.531 "subsystem": "vmd", 00:24:45.531 "config": [] 00:24:45.531 }, 00:24:45.531 { 00:24:45.531 "subsystem": "accel", 00:24:45.531 "config": [ 00:24:45.531 { 00:24:45.531 "method": "accel_set_options", 00:24:45.531 "params": { 00:24:45.531 "small_cache_size": 128, 00:24:45.531 "large_cache_size": 16, 00:24:45.531 "task_count": 2048, 00:24:45.531 "sequence_count": 2048, 00:24:45.531 
"buf_count": 2048 00:24:45.531 } 00:24:45.531 } 00:24:45.531 ] 00:24:45.531 }, 00:24:45.531 { 00:24:45.531 "subsystem": "bdev", 00:24:45.531 "config": [ 00:24:45.531 { 00:24:45.531 "method": "bdev_set_options", 00:24:45.531 "params": { 00:24:45.531 "bdev_io_pool_size": 65535, 00:24:45.531 "bdev_io_cache_size": 256, 00:24:45.531 "bdev_auto_examine": true, 00:24:45.531 "iobuf_small_cache_size": 128, 00:24:45.531 "iobuf_large_cache_size": 16 00:24:45.531 } 00:24:45.531 }, 00:24:45.531 { 00:24:45.531 "method": "bdev_raid_set_options", 00:24:45.531 "params": { 00:24:45.531 "process_window_size_kb": 1024, 00:24:45.531 "process_max_bandwidth_mb_sec": 0 00:24:45.531 } 00:24:45.531 }, 00:24:45.531 { 00:24:45.531 "method": "bdev_iscsi_set_options", 00:24:45.531 "params": { 00:24:45.531 "timeout_sec": 30 00:24:45.531 } 00:24:45.531 }, 00:24:45.531 { 00:24:45.531 "method": "bdev_nvme_set_options", 00:24:45.531 "params": { 00:24:45.531 "action_on_timeout": "none", 00:24:45.531 "timeout_us": 0, 00:24:45.531 "timeout_admin_us": 0, 00:24:45.531 "keep_alive_timeout_ms": 10000, 00:24:45.531 "arbitration_burst": 0, 00:24:45.531 "low_priority_weight": 0, 00:24:45.531 "medium_priority_weight": 0, 00:24:45.531 "high_priority_weight": 0, 00:24:45.531 "nvme_adminq_poll_period_us": 10000, 00:24:45.531 "nvme_ioq_poll_period_us": 0, 00:24:45.531 "io_queue_requests": 512, 00:24:45.531 "delay_cmd_submit": true, 00:24:45.531 "transport_retry_count": 4, 00:24:45.531 "bdev_retry_count": 3, 00:24:45.531 "transport_ack_timeout": 0, 00:24:45.531 "ctrlr_loss_timeout_sec": 0, 00:24:45.531 "reconnect_delay_sec": 0, 00:24:45.531 "fast_io_fail_timeout_sec": 0, 00:24:45.531 "disable_auto_failback": false, 00:24:45.531 "generate_uuids": false, 00:24:45.531 "transport_tos": 0, 00:24:45.531 "nvme_error_stat": false, 00:24:45.531 "rdma_srq_size": 0, 00:24:45.531 "io_path_stat": false, 00:24:45.531 "allow_accel_sequence": false, 00:24:45.531 "rdma_max_cq_size": 0, 00:24:45.531 "rdma_cm_event_timeout_ms": 0, 
00:24:45.531 "dhchap_digests": [ 00:24:45.531 "sha256", 00:24:45.531 "sha384", 00:24:45.531 "sha512" 00:24:45.531 ], 00:24:45.531 "dhchap_dhgroups": [ 00:24:45.531 "null", 00:24:45.531 "ffdhe2048", 00:24:45.531 "ffdhe3072", 00:24:45.531 "ffdhe4096", 00:24:45.531 "ffdhe6144", 00:24:45.531 "ffdhe8192" 00:24:45.531 ] 00:24:45.531 } 00:24:45.531 }, 00:24:45.531 { 00:24:45.531 "method": "bdev_nvme_attach_controller", 00:24:45.531 "params": { 00:24:45.531 "name": "nvme0", 00:24:45.531 "trtype": "TCP", 00:24:45.531 "adrfam": "IPv4", 00:24:45.531 "traddr": "10.0.0.2", 00:24:45.531 "trsvcid": "4420", 00:24:45.531 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.531 "prchk_reftag": false, 00:24:45.531 "prchk_guard": false, 00:24:45.531 "ctrlr_loss_timeout_sec": 0, 00:24:45.531 "reconnect_delay_sec": 0, 00:24:45.531 "fast_io_fail_timeout_sec": 0, 00:24:45.531 "psk": "key0", 00:24:45.531 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:45.531 "hdgst": false, 00:24:45.531 "ddgst": false, 00:24:45.531 "multipath": "multipath" 00:24:45.531 } 00:24:45.531 }, 00:24:45.531 { 00:24:45.531 "method": "bdev_nvme_set_hotplug", 00:24:45.531 "params": { 00:24:45.531 "period_us": 100000, 00:24:45.531 "enable": false 00:24:45.531 } 00:24:45.531 }, 00:24:45.531 { 00:24:45.531 "method": "bdev_enable_histogram", 00:24:45.531 "params": { 00:24:45.531 "name": "nvme0n1", 00:24:45.531 "enable": true 00:24:45.531 } 00:24:45.531 }, 00:24:45.531 { 00:24:45.531 "method": "bdev_wait_for_examine" 00:24:45.531 } 00:24:45.531 ] 00:24:45.531 }, 00:24:45.531 { 00:24:45.531 "subsystem": "nbd", 00:24:45.531 "config": [] 00:24:45.531 } 00:24:45.531 ] 00:24:45.531 }' 00:24:45.531 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 280876 00:24:45.531 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 280876 ']' 00:24:45.531 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 280876 00:24:45.531 20:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:45.531 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:45.531 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 280876 00:24:45.531 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:45.531 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:45.531 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280876' 00:24:45.531 killing process with pid 280876 00:24:45.531 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 280876 00:24:45.531 Received shutdown signal, test time was about 1.000000 seconds 00:24:45.531 00:24:45.531 Latency(us) 00:24:45.531 [2024-11-18T19:25:57.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.532 [2024-11-18T19:25:57.540Z] =================================================================================================================== 00:24:45.532 [2024-11-18T19:25:57.540Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:45.532 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 280876 00:24:45.790 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 280739 00:24:45.790 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 280739 ']' 00:24:45.790 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 280739 00:24:45.790 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:45.790 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:45.790 20:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 280739 00:24:45.790 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:45.790 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:45.790 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280739' 00:24:45.790 killing process with pid 280739 00:24:45.790 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 280739 00:24:45.790 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 280739 00:24:46.049 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:46.049 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:46.049 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:46.049 "subsystems": [ 00:24:46.049 { 00:24:46.049 "subsystem": "keyring", 00:24:46.049 "config": [ 00:24:46.049 { 00:24:46.049 "method": "keyring_file_add_key", 00:24:46.049 "params": { 00:24:46.049 "name": "key0", 00:24:46.049 "path": "/tmp/tmp.pEwW7BeB6i" 00:24:46.049 } 00:24:46.049 } 00:24:46.049 ] 00:24:46.049 }, 00:24:46.049 { 00:24:46.049 "subsystem": "iobuf", 00:24:46.049 "config": [ 00:24:46.049 { 00:24:46.049 "method": "iobuf_set_options", 00:24:46.049 "params": { 00:24:46.049 "small_pool_count": 8192, 00:24:46.049 "large_pool_count": 1024, 00:24:46.049 "small_bufsize": 8192, 00:24:46.049 "large_bufsize": 135168, 00:24:46.049 "enable_numa": false 00:24:46.049 } 00:24:46.049 } 00:24:46.049 ] 00:24:46.049 }, 00:24:46.049 { 00:24:46.049 "subsystem": "sock", 00:24:46.049 "config": [ 00:24:46.049 { 00:24:46.049 "method": "sock_set_default_impl", 00:24:46.049 "params": { 00:24:46.049 "impl_name": "posix" 00:24:46.049 
} 00:24:46.049 }, 00:24:46.049 { 00:24:46.049 "method": "sock_impl_set_options", 00:24:46.049 "params": { 00:24:46.049 "impl_name": "ssl", 00:24:46.049 "recv_buf_size": 4096, 00:24:46.049 "send_buf_size": 4096, 00:24:46.049 "enable_recv_pipe": true, 00:24:46.049 "enable_quickack": false, 00:24:46.049 "enable_placement_id": 0, 00:24:46.049 "enable_zerocopy_send_server": true, 00:24:46.049 "enable_zerocopy_send_client": false, 00:24:46.049 "zerocopy_threshold": 0, 00:24:46.049 "tls_version": 0, 00:24:46.049 "enable_ktls": false 00:24:46.049 } 00:24:46.049 }, 00:24:46.049 { 00:24:46.049 "method": "sock_impl_set_options", 00:24:46.049 "params": { 00:24:46.049 "impl_name": "posix", 00:24:46.049 "recv_buf_size": 2097152, 00:24:46.049 "send_buf_size": 2097152, 00:24:46.049 "enable_recv_pipe": true, 00:24:46.049 "enable_quickack": false, 00:24:46.049 "enable_placement_id": 0, 00:24:46.049 "enable_zerocopy_send_server": true, 00:24:46.049 "enable_zerocopy_send_client": false, 00:24:46.049 "zerocopy_threshold": 0, 00:24:46.049 "tls_version": 0, 00:24:46.049 "enable_ktls": false 00:24:46.049 } 00:24:46.049 } 00:24:46.049 ] 00:24:46.049 }, 00:24:46.049 { 00:24:46.049 "subsystem": "vmd", 00:24:46.049 "config": [] 00:24:46.049 }, 00:24:46.049 { 00:24:46.049 "subsystem": "accel", 00:24:46.049 "config": [ 00:24:46.049 { 00:24:46.049 "method": "accel_set_options", 00:24:46.049 "params": { 00:24:46.049 "small_cache_size": 128, 00:24:46.049 "large_cache_size": 16, 00:24:46.049 "task_count": 2048, 00:24:46.049 "sequence_count": 2048, 00:24:46.049 "buf_count": 2048 00:24:46.049 } 00:24:46.049 } 00:24:46.049 ] 00:24:46.049 }, 00:24:46.049 { 00:24:46.049 "subsystem": "bdev", 00:24:46.049 "config": [ 00:24:46.049 { 00:24:46.049 "method": "bdev_set_options", 00:24:46.049 "params": { 00:24:46.049 "bdev_io_pool_size": 65535, 00:24:46.049 "bdev_io_cache_size": 256, 00:24:46.049 "bdev_auto_examine": true, 00:24:46.049 "iobuf_small_cache_size": 128, 00:24:46.049 "iobuf_large_cache_size": 16 
00:24:46.049 } 00:24:46.049 }, 00:24:46.049 { 00:24:46.049 "method": "bdev_raid_set_options", 00:24:46.049 "params": { 00:24:46.049 "process_window_size_kb": 1024, 00:24:46.049 "process_max_bandwidth_mb_sec": 0 00:24:46.049 } 00:24:46.049 }, 00:24:46.049 { 00:24:46.049 "method": "bdev_iscsi_set_options", 00:24:46.049 "params": { 00:24:46.049 "timeout_sec": 30 00:24:46.049 } 00:24:46.049 }, 00:24:46.049 { 00:24:46.049 "method": "bdev_nvme_set_options", 00:24:46.049 "params": { 00:24:46.049 "action_on_timeout": "none", 00:24:46.049 "timeout_us": 0, 00:24:46.049 "timeout_admin_us": 0, 00:24:46.049 "keep_alive_timeout_ms": 10000, 00:24:46.049 "arbitration_burst": 0, 00:24:46.049 "low_priority_weight": 0, 00:24:46.049 "medium_priority_weight": 0, 00:24:46.049 "high_priority_weight": 0, 00:24:46.049 "nvme_adminq_poll_period_us": 10000, 00:24:46.049 "nvme_ioq_poll_period_us": 0, 00:24:46.049 "io_queue_requests": 0, 00:24:46.049 "delay_cmd_submit": true, 00:24:46.049 "transport_retry_count": 4, 00:24:46.049 "bdev_retry_count": 3, 00:24:46.049 "transport_ack_timeout": 0, 00:24:46.049 "ctrlr_loss_timeout_sec": 0, 00:24:46.049 "reconnect_delay_sec": 0, 00:24:46.049 "fast_io_fail_timeout_sec": 0, 00:24:46.049 "disable_auto_failback": false, 00:24:46.049 "generate_uuids": false, 00:24:46.049 "transport_tos": 0, 00:24:46.049 "nvme_error_stat": false, 00:24:46.049 "rdma_srq_size": 0, 00:24:46.049 "io_path_stat": false, 00:24:46.049 "allow_accel_sequence": false, 00:24:46.049 "rdma_max_cq_size": 0, 00:24:46.049 "rdma_cm_event_timeout_ms": 0, 00:24:46.049 "dhchap_digests": [ 00:24:46.049 "sha256", 00:24:46.049 "sha384", 00:24:46.049 "sha512" 00:24:46.049 ], 00:24:46.049 "dhchap_dhgroups": [ 00:24:46.049 "null", 00:24:46.049 "ffdhe2048", 00:24:46.049 "ffdhe3072", 00:24:46.049 "ffdhe4096", 00:24:46.049 "ffdhe6144", 00:24:46.049 "ffdhe8192" 00:24:46.049 ] 00:24:46.049 } 00:24:46.049 }, 00:24:46.049 { 00:24:46.049 "method": "bdev_nvme_set_hotplug", 00:24:46.049 "params": { 00:24:46.049 
"period_us": 100000, 00:24:46.049 "enable": false 00:24:46.049 } 00:24:46.049 }, 00:24:46.049 { 00:24:46.049 "method": "bdev_malloc_create", 00:24:46.049 "params": { 00:24:46.049 "name": "malloc0", 00:24:46.049 "num_blocks": 8192, 00:24:46.049 "block_size": 4096, 00:24:46.049 "physical_block_size": 4096, 00:24:46.049 "uuid": "d54ce3d3-6894-4d5c-aa69-29684e9ba098", 00:24:46.049 "optimal_io_boundary": 0, 00:24:46.049 "md_size": 0, 00:24:46.049 "dif_type": 0, 00:24:46.049 "dif_is_head_of_md": false, 00:24:46.049 "dif_pi_format": 0 00:24:46.049 } 00:24:46.049 }, 00:24:46.049 { 00:24:46.049 "method": "bdev_wait_for_examine" 00:24:46.049 } 00:24:46.049 ] 00:24:46.049 }, 00:24:46.049 { 00:24:46.049 "subsystem": "nbd", 00:24:46.049 "config": [] 00:24:46.049 }, 00:24:46.049 { 00:24:46.049 "subsystem": "scheduler", 00:24:46.049 "config": [ 00:24:46.049 { 00:24:46.049 "method": "framework_set_scheduler", 00:24:46.049 "params": { 00:24:46.049 "name": "static" 00:24:46.049 } 00:24:46.049 } 00:24:46.049 ] 00:24:46.049 }, 00:24:46.049 { 00:24:46.049 "subsystem": "nvmf", 00:24:46.049 "config": [ 00:24:46.049 { 00:24:46.049 "method": "nvmf_set_config", 00:24:46.049 "params": { 00:24:46.049 "discovery_filter": "match_any", 00:24:46.049 "admin_cmd_passthru": { 00:24:46.049 "identify_ctrlr": false 00:24:46.049 }, 00:24:46.049 "dhchap_digests": [ 00:24:46.049 "sha256", 00:24:46.049 "sha384", 00:24:46.049 "sha512" 00:24:46.049 ], 00:24:46.049 "dhchap_dhgroups": [ 00:24:46.049 "null", 00:24:46.049 "ffdhe2048", 00:24:46.049 "ffdhe3072", 00:24:46.049 "ffdhe4096", 00:24:46.049 "ffdhe6144", 00:24:46.049 "ffdhe8192" 00:24:46.049 ] 00:24:46.049 } 00:24:46.049 }, 00:24:46.049 { 00:24:46.049 "method": "nvmf_set_max_subsystems", 00:24:46.049 "params": { 00:24:46.049 "max_subsystems": 1024 00:24:46.049 } 00:24:46.049 }, 00:24:46.049 { 00:24:46.049 "method": "nvmf_set_crdt", 00:24:46.049 "params": { 00:24:46.049 "crdt1": 0, 00:24:46.049 "crdt2": 0, 00:24:46.049 "crdt3": 0 00:24:46.049 } 
00:24:46.049 }, 00:24:46.049 { 00:24:46.049 "method": "nvmf_create_transport", 00:24:46.049 "params": { 00:24:46.049 "trtype": "TCP", 00:24:46.049 "max_queue_depth": 128, 00:24:46.049 "max_io_qpairs_per_ctrlr": 127, 00:24:46.049 "in_capsule_data_size": 4096, 00:24:46.049 "max_io_size": 131072, 00:24:46.049 "io_unit_size": 131072, 00:24:46.049 "max_aq_depth": 128, 00:24:46.049 "num_shared_buffers": 511, 00:24:46.049 "buf_cache_size": 4294967295, 00:24:46.049 "dif_insert_or_strip": false, 00:24:46.049 "zcopy": false, 00:24:46.049 "c2h_success": false, 00:24:46.049 "sock_priority": 0, 00:24:46.049 "abort_timeout_sec": 1, 00:24:46.049 "ack_timeout": 0, 00:24:46.049 "data_wr_pool_size": 0 00:24:46.049 } 00:24:46.049 }, 00:24:46.049 { 00:24:46.049 "method": "nvmf_create_subsystem", 00:24:46.049 "params": { 00:24:46.049 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.050 "allow_any_host": false, 00:24:46.050 "serial_number": "00000000000000000000", 00:24:46.050 "model_number": "SPDK bdev Controller", 00:24:46.050 "max_namespaces": 32, 00:24:46.050 "min_cntlid": 1, 00:24:46.050 "max_cntlid": 65519, 00:24:46.050 "ana_reporting": false 00:24:46.050 } 00:24:46.050 }, 00:24:46.050 { 00:24:46.050 "method": "nvmf_subsystem_add_host", 00:24:46.050 "params": { 00:24:46.050 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.050 "host": "nqn.2016-06.io.spdk:host1", 00:24:46.050 "psk": "key0" 00:24:46.050 } 00:24:46.050 }, 00:24:46.050 { 00:24:46.050 "method": "nvmf_subsystem_add_ns", 00:24:46.050 "params": { 00:24:46.050 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.050 "namespace": { 00:24:46.050 "nsid": 1, 00:24:46.050 "bdev_name": "malloc0", 00:24:46.050 "nguid": "D54CE3D368944D5CAA6929684E9BA098", 00:24:46.050 "uuid": "d54ce3d3-6894-4d5c-aa69-29684e9ba098", 00:24:46.050 "no_auto_visible": false 00:24:46.050 } 00:24:46.050 } 00:24:46.050 }, 00:24:46.050 { 00:24:46.050 "method": "nvmf_subsystem_add_listener", 00:24:46.050 "params": { 00:24:46.050 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:24:46.050 "listen_address": { 00:24:46.050 "trtype": "TCP", 00:24:46.050 "adrfam": "IPv4", 00:24:46.050 "traddr": "10.0.0.2", 00:24:46.050 "trsvcid": "4420" 00:24:46.050 }, 00:24:46.050 "secure_channel": false, 00:24:46.050 "sock_impl": "ssl" 00:24:46.050 } 00:24:46.050 } 00:24:46.050 ] 00:24:46.050 } 00:24:46.050 ] 00:24:46.050 }' 00:24:46.050 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:46.050 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:46.050 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=281171 00:24:46.050 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:46.050 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 281171 00:24:46.050 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 281171 ']' 00:24:46.050 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.050 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:46.050 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:46.050 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:46.050 20:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:46.050 [2024-11-18 20:25:57.944734] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:24:46.050 [2024-11-18 20:25:57.944847] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:46.050 [2024-11-18 20:25:58.016294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.308 [2024-11-18 20:25:58.056864] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:46.308 [2024-11-18 20:25:58.056921] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:46.308 [2024-11-18 20:25:58.056934] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:46.308 [2024-11-18 20:25:58.056945] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:46.308 [2024-11-18 20:25:58.056954] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:46.308 [2024-11-18 20:25:58.057535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.308 [2024-11-18 20:25:58.295905] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.568 [2024-11-18 20:25:58.327830] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:46.568 [2024-11-18 20:25:58.328077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:47.136 20:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:47.136 20:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:47.136 20:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:47.136 20:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:47.136 20:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:47.136 20:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:47.136 20:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=281323 00:24:47.136 20:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 281323 /var/tmp/bdevperf.sock 00:24:47.136 20:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 281323 ']' 00:24:47.136 20:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:47.136 20:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:47.136 20:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:24:47.136 20:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:47.136 "subsystems": [ 00:24:47.136 { 00:24:47.136 "subsystem": "keyring", 00:24:47.136 "config": [ 00:24:47.136 { 00:24:47.136 "method": "keyring_file_add_key", 00:24:47.136 "params": { 00:24:47.136 "name": "key0", 00:24:47.136 "path": "/tmp/tmp.pEwW7BeB6i" 00:24:47.136 } 00:24:47.136 } 00:24:47.136 ] 00:24:47.136 }, 00:24:47.136 { 00:24:47.136 "subsystem": "iobuf", 00:24:47.136 "config": [ 00:24:47.136 { 00:24:47.136 "method": "iobuf_set_options", 00:24:47.136 "params": { 00:24:47.136 "small_pool_count": 8192, 00:24:47.136 "large_pool_count": 1024, 00:24:47.136 "small_bufsize": 8192, 00:24:47.136 "large_bufsize": 135168, 00:24:47.136 "enable_numa": false 00:24:47.136 } 00:24:47.136 } 00:24:47.136 ] 00:24:47.136 }, 00:24:47.136 { 00:24:47.136 "subsystem": "sock", 00:24:47.136 "config": [ 00:24:47.136 { 00:24:47.136 "method": "sock_set_default_impl", 00:24:47.136 "params": { 00:24:47.136 "impl_name": "posix" 00:24:47.136 } 00:24:47.136 }, 00:24:47.136 { 00:24:47.136 "method": "sock_impl_set_options", 00:24:47.136 "params": { 00:24:47.136 "impl_name": "ssl", 00:24:47.136 "recv_buf_size": 4096, 00:24:47.136 "send_buf_size": 4096, 00:24:47.136 "enable_recv_pipe": true, 00:24:47.136 "enable_quickack": false, 00:24:47.136 "enable_placement_id": 0, 00:24:47.136 "enable_zerocopy_send_server": true, 00:24:47.136 "enable_zerocopy_send_client": false, 00:24:47.136 "zerocopy_threshold": 0, 00:24:47.136 "tls_version": 0, 00:24:47.136 "enable_ktls": false 00:24:47.136 } 00:24:47.136 }, 00:24:47.136 { 00:24:47.136 "method": "sock_impl_set_options", 00:24:47.136 "params": { 00:24:47.136 "impl_name": "posix", 00:24:47.136 "recv_buf_size": 2097152, 00:24:47.136 "send_buf_size": 2097152, 00:24:47.136 "enable_recv_pipe": true, 00:24:47.136 "enable_quickack": false, 00:24:47.136 "enable_placement_id": 0, 00:24:47.136 "enable_zerocopy_send_server": true, 00:24:47.136 
"enable_zerocopy_send_client": false, 00:24:47.136 "zerocopy_threshold": 0, 00:24:47.136 "tls_version": 0, 00:24:47.136 "enable_ktls": false 00:24:47.136 } 00:24:47.136 } 00:24:47.136 ] 00:24:47.136 }, 00:24:47.136 { 00:24:47.136 "subsystem": "vmd", 00:24:47.136 "config": [] 00:24:47.136 }, 00:24:47.136 { 00:24:47.136 "subsystem": "accel", 00:24:47.136 "config": [ 00:24:47.136 { 00:24:47.136 "method": "accel_set_options", 00:24:47.136 "params": { 00:24:47.136 "small_cache_size": 128, 00:24:47.136 "large_cache_size": 16, 00:24:47.136 "task_count": 2048, 00:24:47.136 "sequence_count": 2048, 00:24:47.136 "buf_count": 2048 00:24:47.136 } 00:24:47.136 } 00:24:47.136 ] 00:24:47.136 }, 00:24:47.136 { 00:24:47.136 "subsystem": "bdev", 00:24:47.136 "config": [ 00:24:47.136 { 00:24:47.136 "method": "bdev_set_options", 00:24:47.136 "params": { 00:24:47.136 "bdev_io_pool_size": 65535, 00:24:47.136 "bdev_io_cache_size": 256, 00:24:47.136 "bdev_auto_examine": true, 00:24:47.136 "iobuf_small_cache_size": 128, 00:24:47.136 "iobuf_large_cache_size": 16 00:24:47.136 } 00:24:47.136 }, 00:24:47.136 { 00:24:47.136 "method": "bdev_raid_set_options", 00:24:47.136 "params": { 00:24:47.136 "process_window_size_kb": 1024, 00:24:47.136 "process_max_bandwidth_mb_sec": 0 00:24:47.136 } 00:24:47.136 }, 00:24:47.136 { 00:24:47.136 "method": "bdev_iscsi_set_options", 00:24:47.136 "params": { 00:24:47.136 "timeout_sec": 30 00:24:47.136 } 00:24:47.136 }, 00:24:47.136 { 00:24:47.136 "method": "bdev_nvme_set_options", 00:24:47.136 "params": { 00:24:47.136 "action_on_timeout": "none", 00:24:47.136 "timeout_us": 0, 00:24:47.136 "timeout_admin_us": 0, 00:24:47.136 "keep_alive_timeout_ms": 10000, 00:24:47.136 "arbitration_burst": 0, 00:24:47.137 "low_priority_weight": 0, 00:24:47.137 "medium_priority_weight": 0, 00:24:47.137 "high_priority_weight": 0, 00:24:47.137 "nvme_adminq_poll_period_us": 10000, 00:24:47.137 "nvme_ioq_poll_period_us": 0, 00:24:47.137 "io_queue_requests": 512, 00:24:47.137 
"delay_cmd_submit": true, 00:24:47.137 "transport_retry_count": 4, 00:24:47.137 "bdev_retry_count": 3, 00:24:47.137 "transport_ack_timeout": 0, 00:24:47.137 "ctrlr_loss_timeout_sec": 0, 00:24:47.137 "reconnect_delay_sec": 0, 00:24:47.137 "fast_io_fail_timeout_sec": 0, 00:24:47.137 "disable_auto_failback": false, 00:24:47.137 "generate_uuids": false, 00:24:47.137 "transport_tos": 0, 00:24:47.137 "nvme_error_stat": false, 00:24:47.137 "rdma_srq_size": 0, 00:24:47.137 "io_path_stat": false, 00:24:47.137 "allow_accel_sequence": false, 00:24:47.137 "rdma_max_cq_size": 0, 00:24:47.137 "rdma_cm_event_timeout_ms": 0, 00:24:47.137 "dhchap_digests": [ 00:24:47.137 "sha256", 00:24:47.137 "sha384", 00:24:47.137 "sha512" 00:24:47.137 ], 00:24:47.137 "dhchap_dhgroups": [ 00:24:47.137 "null", 00:24:47.137 "ffdhe2048", 00:24:47.137 "ffdhe3072", 00:24:47.137 "ffdhe4096", 00:24:47.137 "ffdhe6144", 00:24:47.137 "ffdhe8192" 00:24:47.137 ] 00:24:47.137 } 00:24:47.137 }, 00:24:47.137 { 00:24:47.137 "method": "bdev_nvme_attach_controller", 00:24:47.137 "params": { 00:24:47.137 "name": "nvme0", 00:24:47.137 "trtype": "TCP", 00:24:47.137 "adrfam": "IPv4", 00:24:47.137 "traddr": "10.0.0.2", 00:24:47.137 "trsvcid": "4420", 00:24:47.137 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.137 "prchk_reftag": false, 00:24:47.137 "prchk_guard": false, 00:24:47.137 "ctrlr_loss_timeout_sec": 0, 00:24:47.137 "reconnect_delay_sec": 0, 00:24:47.137 "fast_io_fail_timeout_sec": 0, 00:24:47.137 "psk": "key0", 00:24:47.137 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:47.137 "hdgst": false, 00:24:47.137 "ddgst": false, 00:24:47.137 "multipath": "multipath" 00:24:47.137 } 00:24:47.137 }, 00:24:47.137 { 00:24:47.137 "method": "bdev_nvme_set_hotplug", 00:24:47.137 "params": { 00:24:47.137 "period_us": 100000, 00:24:47.137 "enable": false 00:24:47.137 } 00:24:47.137 }, 00:24:47.137 { 00:24:47.137 "method": "bdev_enable_histogram", 00:24:47.137 "params": { 00:24:47.137 "name": "nvme0n1", 00:24:47.137 "enable": 
true 00:24:47.137 } 00:24:47.137 }, 00:24:47.137 { 00:24:47.137 "method": "bdev_wait_for_examine" 00:24:47.137 } 00:24:47.137 ] 00:24:47.137 }, 00:24:47.137 { 00:24:47.137 "subsystem": "nbd", 00:24:47.137 "config": [] 00:24:47.137 } 00:24:47.137 ] 00:24:47.137 }' 00:24:47.137 20:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:47.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:47.137 20:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:47.137 20:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:47.137 [2024-11-18 20:25:59.010783] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:24:47.137 [2024-11-18 20:25:59.010876] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid281323 ] 00:24:47.137 [2024-11-18 20:25:59.076348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.137 [2024-11-18 20:25:59.122311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.395 [2024-11-18 20:25:59.300577] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:47.653 20:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:47.653 20:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:47.653 20:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:47.653 20:25:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:47.911 20:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.911 20:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:47.911 Running I/O for 1 seconds... 00:24:48.852 3191.00 IOPS, 12.46 MiB/s 00:24:48.852 Latency(us) 00:24:48.852 [2024-11-18T19:26:00.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.852 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:48.852 Verification LBA range: start 0x0 length 0x2000 00:24:48.852 nvme0n1 : 1.02 3258.32 12.73 0.00 0.00 38944.80 6262.33 52817.16 00:24:48.852 [2024-11-18T19:26:00.860Z] =================================================================================================================== 00:24:48.852 [2024-11-18T19:26:00.861Z] Total : 3258.32 12.73 0.00 0.00 38944.80 6262.33 52817.16 00:24:48.853 { 00:24:48.853 "results": [ 00:24:48.853 { 00:24:48.853 "job": "nvme0n1", 00:24:48.853 "core_mask": "0x2", 00:24:48.853 "workload": "verify", 00:24:48.853 "status": "finished", 00:24:48.853 "verify_range": { 00:24:48.853 "start": 0, 00:24:48.853 "length": 8192 00:24:48.853 }, 00:24:48.853 "queue_depth": 128, 00:24:48.853 "io_size": 4096, 00:24:48.853 "runtime": 1.018624, 00:24:48.853 "iops": 3258.317102287007, 00:24:48.853 "mibps": 12.72780118080862, 00:24:48.853 "io_failed": 0, 00:24:48.853 "io_timeout": 0, 00:24:48.853 "avg_latency_us": 38944.80072668028, 00:24:48.853 "min_latency_us": 6262.328888888889, 00:24:48.853 "max_latency_us": 52817.16148148148 00:24:48.853 } 00:24:48.853 ], 00:24:48.853 "core_count": 1 00:24:48.853 } 00:24:49.112 20:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:49.112 20:26:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:49.112 20:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:49.112 20:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:24:49.112 20:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:24:49.112 20:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:49.112 20:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:49.112 20:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:49.112 20:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:49.112 20:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:49.112 20:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:49.112 nvmf_trace.0 00:24:49.112 20:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:24:49.112 20:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 281323 00:24:49.112 20:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 281323 ']' 00:24:49.112 20:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 281323 00:24:49.112 20:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:49.112 20:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:49.112 20:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 281323 00:24:49.112 20:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:49.112 20:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:49.112 20:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 281323' 00:24:49.112 killing process with pid 281323 00:24:49.112 20:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 281323 00:24:49.112 Received shutdown signal, test time was about 1.000000 seconds 00:24:49.112 00:24:49.112 Latency(us) 00:24:49.112 [2024-11-18T19:26:01.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.112 [2024-11-18T19:26:01.120Z] =================================================================================================================== 00:24:49.112 [2024-11-18T19:26:01.120Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:49.112 20:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 281323 00:24:49.372 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:49.372 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:49.372 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:49.372 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:49.372 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:49.372 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:49.372 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:49.372 rmmod nvme_tcp 00:24:49.372 rmmod nvme_fabrics 00:24:49.372 rmmod nvme_keyring 00:24:49.372 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:24:49.372 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:49.372 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:49.372 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 281171 ']' 00:24:49.372 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 281171 00:24:49.372 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 281171 ']' 00:24:49.372 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 281171 00:24:49.372 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:49.372 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:49.372 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 281171 00:24:49.372 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:49.372 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:49.372 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 281171' 00:24:49.372 killing process with pid 281171 00:24:49.372 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 281171 00:24:49.372 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 281171 00:24:49.632 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:49.632 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:49.632 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:49.632 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:24:49.632 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:49.632 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:49.632 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:49.632 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:49.632 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:49.632 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.632 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:49.632 20:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.541 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:51.541 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.dqQDTyk177 /tmp/tmp.issWGZ7FjK /tmp/tmp.pEwW7BeB6i 00:24:51.541 00:24:51.541 real 1m21.968s 00:24:51.541 user 2m15.694s 00:24:51.541 sys 0m25.116s 00:24:51.541 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:51.541 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:51.541 ************************************ 00:24:51.541 END TEST nvmf_tls 00:24:51.541 ************************************ 00:24:51.541 20:26:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:51.541 20:26:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:51.541 20:26:03 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:24:51.541 20:26:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:51.800 ************************************ 00:24:51.800 START TEST nvmf_fips 00:24:51.800 ************************************ 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:51.800 * Looking for test storage... 00:24:51.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:51.800 
20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:51.800 20:26:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:51.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.800 --rc genhtml_branch_coverage=1 00:24:51.800 --rc genhtml_function_coverage=1 00:24:51.800 --rc genhtml_legend=1 00:24:51.800 --rc geninfo_all_blocks=1 00:24:51.800 --rc geninfo_unexecuted_blocks=1 00:24:51.800 00:24:51.800 ' 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:51.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.800 --rc genhtml_branch_coverage=1 00:24:51.800 --rc genhtml_function_coverage=1 00:24:51.800 --rc genhtml_legend=1 00:24:51.800 --rc geninfo_all_blocks=1 00:24:51.800 --rc geninfo_unexecuted_blocks=1 00:24:51.800 00:24:51.800 ' 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:51.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.800 --rc genhtml_branch_coverage=1 00:24:51.800 --rc genhtml_function_coverage=1 00:24:51.800 --rc genhtml_legend=1 00:24:51.800 --rc geninfo_all_blocks=1 00:24:51.800 --rc geninfo_unexecuted_blocks=1 00:24:51.800 00:24:51.800 ' 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:51.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.800 --rc genhtml_branch_coverage=1 00:24:51.800 --rc genhtml_function_coverage=1 00:24:51.800 --rc genhtml_legend=1 00:24:51.800 --rc geninfo_all_blocks=1 00:24:51.800 --rc geninfo_unexecuted_blocks=1 00:24:51.800 00:24:51.800 ' 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:51.800 20:26:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:51.800 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.801 20:26:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:51.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:51.801 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:52.060 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:52.060 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:52.060 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:52.060 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:52.060 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:52.060 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:52.060 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:52.060 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:52.060 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:52.060 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:52.060 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:24:52.060 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:52.060 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:24:52.060 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:52.060 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:24:52.060 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:52.060 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:24:52.060 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:52.061 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:24:52.061 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:24:52.061 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:24:52.061 Error setting digest 00:24:52.061 40C2A3F8E97F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:52.061 40C2A3F8E97F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:52.061 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:24:52.061 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:52.061 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:52.061 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:52.061 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:52.061 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:52.061 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:52.061 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:52.061 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:52.061 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:52.061 20:26:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.061 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:52.061 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.061 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:52.061 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:52.061 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:52.061 20:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:54.596 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:54.596 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:54.596 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:54.596 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:54.596 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:54.597 20:26:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:54.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:54.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:24:54.597 00:24:54.597 --- 10.0.0.2 ping statistics --- 00:24:54.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.597 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:54.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:54.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:24:54.597 00:24:54.597 --- 10.0.0.1 ping statistics --- 00:24:54.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.597 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:54.597 20:26:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=283568 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 283568 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 283568 ']' 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:54.597 [2024-11-18 20:26:06.349326] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:24:54.597 [2024-11-18 20:26:06.349413] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.597 [2024-11-18 20:26:06.419323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.597 [2024-11-18 20:26:06.463009] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.597 [2024-11-18 20:26:06.463063] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:54.597 [2024-11-18 20:26:06.463092] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:54.597 [2024-11-18 20:26:06.463104] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:54.597 [2024-11-18 20:26:06.463114] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:54.597 [2024-11-18 20:26:06.463675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.iJS 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.iJS 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.iJS 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.iJS 00:24:54.597 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:54.855 [2024-11-18 20:26:06.844337] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:54.855 [2024-11-18 20:26:06.860347] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:54.855 [2024-11-18 20:26:06.860548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:55.113 malloc0 00:24:55.113 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:55.113 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=283709 00:24:55.113 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:55.113 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 283709 /var/tmp/bdevperf.sock 00:24:55.113 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 283709 ']' 00:24:55.113 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:55.113 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:55.113 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:55.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:55.113 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:55.113 20:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:55.113 [2024-11-18 20:26:06.985096] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:24:55.113 [2024-11-18 20:26:06.985181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid283709 ] 00:24:55.113 [2024-11-18 20:26:07.050274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.113 [2024-11-18 20:26:07.095285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.371 20:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:55.371 20:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:55.371 20:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.iJS 00:24:55.629 20:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:55.886 [2024-11-18 20:26:07.723815] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:55.886 TLSTESTn1 00:24:55.886 20:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:56.146 Running I/O for 10 seconds... 
00:24:58.041 3235.00 IOPS, 12.64 MiB/s [2024-11-18T19:26:10.989Z] 3179.00 IOPS, 12.42 MiB/s [2024-11-18T19:26:12.371Z] 3229.33 IOPS, 12.61 MiB/s [2024-11-18T19:26:12.941Z] 3228.50 IOPS, 12.61 MiB/s [2024-11-18T19:26:14.318Z] 3255.80 IOPS, 12.72 MiB/s [2024-11-18T19:26:15.255Z] 3252.50 IOPS, 12.71 MiB/s [2024-11-18T19:26:16.194Z] 3266.43 IOPS, 12.76 MiB/s [2024-11-18T19:26:17.131Z] 3271.88 IOPS, 12.78 MiB/s [2024-11-18T19:26:18.068Z] 3258.44 IOPS, 12.73 MiB/s [2024-11-18T19:26:18.068Z] 3277.00 IOPS, 12.80 MiB/s 00:25:06.060 Latency(us) 00:25:06.060 [2024-11-18T19:26:18.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.060 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:06.060 Verification LBA range: start 0x0 length 0x2000 00:25:06.060 TLSTESTn1 : 10.02 3284.00 12.83 0.00 0.00 38918.03 6189.51 32622.36 00:25:06.060 [2024-11-18T19:26:18.068Z] =================================================================================================================== 00:25:06.060 [2024-11-18T19:26:18.068Z] Total : 3284.00 12.83 0.00 0.00 38918.03 6189.51 32622.36 00:25:06.060 { 00:25:06.060 "results": [ 00:25:06.060 { 00:25:06.060 "job": "TLSTESTn1", 00:25:06.060 "core_mask": "0x4", 00:25:06.060 "workload": "verify", 00:25:06.060 "status": "finished", 00:25:06.060 "verify_range": { 00:25:06.060 "start": 0, 00:25:06.060 "length": 8192 00:25:06.060 }, 00:25:06.060 "queue_depth": 128, 00:25:06.060 "io_size": 4096, 00:25:06.060 "runtime": 10.017344, 00:25:06.060 "iops": 3284.004223075498, 00:25:06.060 "mibps": 12.828141496388664, 00:25:06.060 "io_failed": 0, 00:25:06.060 "io_timeout": 0, 00:25:06.060 "avg_latency_us": 38918.025445886655, 00:25:06.060 "min_latency_us": 6189.511111111111, 00:25:06.060 "max_latency_us": 32622.364444444444 00:25:06.060 } 00:25:06.060 ], 00:25:06.060 "core_count": 1 00:25:06.060 } 00:25:06.060 20:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:06.060 
20:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:06.060 20:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:25:06.060 20:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:25:06.060 20:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:25:06.060 20:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:06.060 20:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:25:06.060 20:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:25:06.060 20:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:25:06.060 20:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:06.060 nvmf_trace.0 00:25:06.060 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:25:06.061 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 283709 00:25:06.061 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 283709 ']' 00:25:06.061 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 283709 00:25:06.061 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:25:06.061 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:06.061 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 283709 00:25:06.320 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:06.320 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:06.320 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 283709' 00:25:06.320 killing process with pid 283709 00:25:06.320 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 283709 00:25:06.320 Received shutdown signal, test time was about 10.000000 seconds 00:25:06.320 00:25:06.320 Latency(us) 00:25:06.320 [2024-11-18T19:26:18.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.320 [2024-11-18T19:26:18.328Z] =================================================================================================================== 00:25:06.320 [2024-11-18T19:26:18.328Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:06.320 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 283709 00:25:06.320 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:06.320 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:06.320 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:25:06.320 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:06.320 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:25:06.320 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:06.320 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:06.320 rmmod nvme_tcp 00:25:06.320 rmmod nvme_fabrics 00:25:06.320 rmmod nvme_keyring 00:25:06.579 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:06.579 20:26:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:25:06.579 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:25:06.579 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 283568 ']' 00:25:06.579 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 283568 00:25:06.579 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 283568 ']' 00:25:06.579 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 283568 00:25:06.579 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:25:06.579 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:06.579 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 283568 00:25:06.579 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:06.579 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:06.579 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 283568' 00:25:06.579 killing process with pid 283568 00:25:06.579 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 283568 00:25:06.579 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 283568 00:25:06.838 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:06.838 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:06.838 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:06.838 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 
00:25:06.838 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:25:06.838 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:06.838 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:25:06.838 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:06.838 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:06.838 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.838 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.838 20:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.740 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:08.740 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.iJS 00:25:08.740 00:25:08.740 real 0m17.083s 00:25:08.740 user 0m22.720s 00:25:08.740 sys 0m5.257s 00:25:08.740 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:08.740 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:08.740 ************************************ 00:25:08.740 END TEST nvmf_fips 00:25:08.740 ************************************ 00:25:08.740 20:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:08.740 20:26:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:08.741 20:26:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:25:08.741 20:26:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:08.741 ************************************ 00:25:08.741 START TEST nvmf_control_msg_list 00:25:08.741 ************************************ 00:25:08.741 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:09.000 * Looking for test storage... 00:25:09.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:09.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.000 --rc genhtml_branch_coverage=1 00:25:09.000 --rc genhtml_function_coverage=1 00:25:09.000 --rc genhtml_legend=1 00:25:09.000 --rc geninfo_all_blocks=1 00:25:09.000 --rc geninfo_unexecuted_blocks=1 00:25:09.000 00:25:09.000 ' 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:09.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.000 --rc genhtml_branch_coverage=1 00:25:09.000 --rc genhtml_function_coverage=1 00:25:09.000 --rc genhtml_legend=1 00:25:09.000 --rc geninfo_all_blocks=1 00:25:09.000 --rc geninfo_unexecuted_blocks=1 00:25:09.000 00:25:09.000 ' 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:09.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.000 --rc genhtml_branch_coverage=1 00:25:09.000 --rc genhtml_function_coverage=1 00:25:09.000 --rc genhtml_legend=1 00:25:09.000 --rc geninfo_all_blocks=1 00:25:09.000 --rc geninfo_unexecuted_blocks=1 00:25:09.000 00:25:09.000 ' 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:09.000 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.000 --rc genhtml_branch_coverage=1 00:25:09.000 --rc genhtml_function_coverage=1 00:25:09.000 --rc genhtml_legend=1 00:25:09.000 --rc geninfo_all_blocks=1 00:25:09.000 --rc geninfo_unexecuted_blocks=1 00:25:09.000 00:25:09.000 ' 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:09.000 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:09.001 20:26:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.001 20:26:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:09.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:09.001 20:26:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:25:09.001 20:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:11.533 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:11.533 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:25:11.533 20:26:22 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:11.533 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:11.533 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:11.533 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:11.533 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:11.533 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:25:11.533 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:11.533 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:25:11.533 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:25:11.533 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:25:11.533 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:25:11.533 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:25:11.533 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:25:11.533 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:11.533 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:11.533 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:11.533 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:11.534 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:11.534 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:11.534 20:26:22 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:11.534 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:11.534 20:26:22 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:11.534 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:11.534 20:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:11.534 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:11.534 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:11.534 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:11.534 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:11.534 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:11.534 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:11.534 20:26:23 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:11.534 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:11.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:11.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:25:11.534 00:25:11.534 --- 10.0.0.2 ping statistics --- 00:25:11.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.534 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:25:11.534 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:11.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:11.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:25:11.534 00:25:11.534 --- 10.0.0.1 ping statistics --- 00:25:11.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.534 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:25:11.534 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:11.534 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:25:11.534 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:11.534 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:11.534 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:11.534 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:11.534 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:25:11.534 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:11.534 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:11.534 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:11.534 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:11.534 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:11.534 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:11.534 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=286973 00:25:11.534 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 286973 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 286973 ']' 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:11.535 [2024-11-18 20:26:23.186642] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:25:11.535 [2024-11-18 20:26:23.186717] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.535 [2024-11-18 20:26:23.257302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.535 [2024-11-18 20:26:23.301053] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:11.535 [2024-11-18 20:26:23.301108] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:11.535 [2024-11-18 20:26:23.301137] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:11.535 [2024-11-18 20:26:23.301149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:11.535 [2024-11-18 20:26:23.301158] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:11.535 [2024-11-18 20:26:23.301790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:11.535 [2024-11-18 20:26:23.439671] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:11.535 Malloc0 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:11.535 [2024-11-18 20:26:23.478524] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=287080 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=287083 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=287085 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:11.535 20:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 287080 00:25:11.795 [2024-11-18 20:26:23.557587] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:25:11.795 [2024-11-18 20:26:23.557955] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:11.795 [2024-11-18 20:26:23.558236] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:12.734 Initializing NVMe Controllers 00:25:12.734 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:12.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:12.734 Initialization complete. Launching workers. 00:25:12.734 ======================================================== 00:25:12.734 Latency(us) 00:25:12.734 Device Information : IOPS MiB/s Average min max 00:25:12.734 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 179.00 0.70 5692.34 215.85 40960.28 00:25:12.734 ======================================================== 00:25:12.734 Total : 179.00 0.70 5692.34 215.85 40960.28 00:25:12.734 00:25:12.734 [2024-11-18 20:26:24.679508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126a980 is same with the state(6) to be set 00:25:12.734 Initializing NVMe Controllers 00:25:12.734 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:12.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:12.734 Initialization complete. Launching workers. 
00:25:12.734 ======================================================== 00:25:12.734 Latency(us) 00:25:12.734 Device Information : IOPS MiB/s Average min max 00:25:12.734 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 259.00 1.01 3861.52 161.72 40940.54 00:25:12.734 ======================================================== 00:25:12.734 Total : 259.00 1.01 3861.52 161.72 40940.54 00:25:12.734 00:25:12.734 [2024-11-18 20:26:24.740806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12701e0 is same with the state(6) to be set 00:25:12.992 Initializing NVMe Controllers 00:25:12.992 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:12.992 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:12.992 Initialization complete. Launching workers. 00:25:12.992 ======================================================== 00:25:12.992 Latency(us) 00:25:12.992 Device Information : IOPS MiB/s Average min max 00:25:12.992 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3885.00 15.18 257.02 149.65 487.52 00:25:12.992 ======================================================== 00:25:12.992 Total : 3885.00 15.18 257.02 149.65 487.52 00:25:12.992 00:25:12.992 [2024-11-18 20:26:24.780777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126ae50 is same with the state(6) to be set 00:25:12.992 20:26:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 287083 00:25:12.992 20:26:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 287085 00:25:12.992 20:26:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:12.992 20:26:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:12.992 20:26:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:12.992 20:26:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:12.992 20:26:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:12.992 20:26:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:12.992 20:26:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:12.992 20:26:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:12.992 rmmod nvme_tcp 00:25:12.992 rmmod nvme_fabrics 00:25:12.992 rmmod nvme_keyring 00:25:12.992 20:26:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:12.992 20:26:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:12.992 20:26:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:12.992 20:26:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 286973 ']' 00:25:12.992 20:26:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 286973 00:25:12.992 20:26:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 286973 ']' 00:25:12.992 20:26:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 286973 00:25:12.992 20:26:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:25:12.992 20:26:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:12.992 20:26:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 286973 00:25:12.992 20:26:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:12.992 20:26:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:12.992 20:26:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 286973' 00:25:12.992 killing process with pid 286973 00:25:12.992 20:26:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 286973 00:25:12.992 20:26:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 286973 00:25:13.253 20:26:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:13.253 20:26:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:13.253 20:26:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:13.253 20:26:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:13.253 20:26:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:13.253 20:26:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:25:13.253 20:26:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:25:13.253 20:26:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:13.253 20:26:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:13.253 20:26:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.253 20:26:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:25:13.253 20:26:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.164 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:15.164 00:25:15.164 real 0m6.394s 00:25:15.164 user 0m5.905s 00:25:15.164 sys 0m2.584s 00:25:15.164 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:15.164 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:15.164 ************************************ 00:25:15.164 END TEST nvmf_control_msg_list 00:25:15.164 ************************************ 00:25:15.164 20:26:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:15.164 20:26:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:15.164 20:26:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:15.164 20:26:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:15.164 ************************************ 00:25:15.164 START TEST nvmf_wait_for_buf 00:25:15.164 ************************************ 00:25:15.164 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:15.423 * Looking for test storage... 
00:25:15.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:25:15.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.423 --rc genhtml_branch_coverage=1 00:25:15.423 --rc genhtml_function_coverage=1 00:25:15.423 --rc genhtml_legend=1 00:25:15.423 --rc geninfo_all_blocks=1 00:25:15.423 --rc geninfo_unexecuted_blocks=1 00:25:15.423 00:25:15.423 ' 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:15.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.423 --rc genhtml_branch_coverage=1 00:25:15.423 --rc genhtml_function_coverage=1 00:25:15.423 --rc genhtml_legend=1 00:25:15.423 --rc geninfo_all_blocks=1 00:25:15.423 --rc geninfo_unexecuted_blocks=1 00:25:15.423 00:25:15.423 ' 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:15.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.423 --rc genhtml_branch_coverage=1 00:25:15.423 --rc genhtml_function_coverage=1 00:25:15.423 --rc genhtml_legend=1 00:25:15.423 --rc geninfo_all_blocks=1 00:25:15.423 --rc geninfo_unexecuted_blocks=1 00:25:15.423 00:25:15.423 ' 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:15.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.423 --rc genhtml_branch_coverage=1 00:25:15.423 --rc genhtml_function_coverage=1 00:25:15.423 --rc genhtml_legend=1 00:25:15.423 --rc geninfo_all_blocks=1 00:25:15.423 --rc geninfo_unexecuted_blocks=1 00:25:15.423 00:25:15.423 ' 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:15.423 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:15.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:15.424 20:26:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:17.961 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:17.961 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:17.961 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:17.961 20:26:29 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:17.961 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:17.961 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:17.962 20:26:29 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:17.962 20:26:29 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:17.962 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:17.962 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:25:17.962 00:25:17.962 --- 10.0.0.2 ping statistics --- 00:25:17.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.962 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:17.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:17.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:25:17.962 00:25:17.962 --- 10.0.0.1 ping statistics --- 00:25:17.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.962 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=289183 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 289183 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 289183 ']' 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:17.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:17.962 [2024-11-18 20:26:29.627517] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:25:17.962 [2024-11-18 20:26:29.627609] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:17.962 [2024-11-18 20:26:29.705390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.962 [2024-11-18 20:26:29.750670] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:17.962 [2024-11-18 20:26:29.750727] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:17.962 [2024-11-18 20:26:29.750756] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:17.962 [2024-11-18 20:26:29.750769] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:17.962 [2024-11-18 20:26:29.750779] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:17.962 [2024-11-18 20:26:29.751419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:17.962 
20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.962 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:18.221 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.221 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:18.221 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.221 20:26:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:18.221 Malloc0 00:25:18.221 20:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.221 20:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:18.221 20:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.221 20:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:25:18.221 [2024-11-18 20:26:30.010814] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:18.221 20:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.221 20:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:18.221 20:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.221 20:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:18.221 20:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.221 20:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:18.221 20:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.221 20:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:18.221 20:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.221 20:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:18.221 20:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.221 20:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:18.221 [2024-11-18 20:26:30.035016] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:18.221 20:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:18.222 20:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:18.222 [2024-11-18 20:26:30.120786] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:19.599 Initializing NVMe Controllers 00:25:19.599 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:19.599 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:19.599 Initialization complete. Launching workers. 00:25:19.599 ======================================================== 00:25:19.599 Latency(us) 00:25:19.599 Device Information : IOPS MiB/s Average min max 00:25:19.599 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32261.17 7995.32 63841.41 00:25:19.599 ======================================================== 00:25:19.599 Total : 129.00 16.12 32261.17 7995.32 63841.41 00:25:19.599 00:25:19.860 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:19.860 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:19.860 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.860 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:19.860 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.860 20:26:31 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:25:19.860 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:25:19.860 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:19.860 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:19.860 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:19.860 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:19.860 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:19.860 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:19.860 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:19.860 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:19.860 rmmod nvme_tcp 00:25:19.860 rmmod nvme_fabrics 00:25:19.860 rmmod nvme_keyring 00:25:19.860 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:19.860 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:19.860 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:19.860 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 289183 ']' 00:25:19.860 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 289183 00:25:19.860 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 289183 ']' 00:25:19.860 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 289183 
00:25:19.860 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:25:19.860 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:19.860 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 289183 00:25:19.860 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:19.860 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:19.860 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 289183' 00:25:19.860 killing process with pid 289183 00:25:19.860 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 289183 00:25:19.860 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 289183 00:25:20.119 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:20.119 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:20.119 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:20.119 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:20.119 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:25:20.120 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:25:20.120 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:20.120 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:20.120 20:26:31 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:20.120 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.120 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:20.120 20:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.025 20:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:22.025 00:25:22.025 real 0m6.847s 00:25:22.025 user 0m3.299s 00:25:22.025 sys 0m2.012s 00:25:22.025 20:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:22.025 20:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:22.025 ************************************ 00:25:22.025 END TEST nvmf_wait_for_buf 00:25:22.025 ************************************ 00:25:22.025 20:26:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:25:22.025 20:26:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:22.025 20:26:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:22.025 20:26:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:22.025 20:26:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:22.283 ************************************ 00:25:22.283 START TEST nvmf_fuzz 00:25:22.283 ************************************ 00:25:22.283 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh 
--transport=tcp 00:25:22.283 * Looking for test storage... 00:25:22.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:22.283 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:22.283 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:25:22.283 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:22.283 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:22.283 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:22.283 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:22.283 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:22.283 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:25:22.283 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:25:22.283 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:25:22.283 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:25:22.283 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:25:22.283 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:25:22.283 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:25:22.283 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:22.283 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:25:22.283 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:25:22.283 20:26:34 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:22.283 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:22.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.284 --rc genhtml_branch_coverage=1 00:25:22.284 --rc genhtml_function_coverage=1 
00:25:22.284 --rc genhtml_legend=1 00:25:22.284 --rc geninfo_all_blocks=1 00:25:22.284 --rc geninfo_unexecuted_blocks=1 00:25:22.284 00:25:22.284 ' 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:22.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.284 --rc genhtml_branch_coverage=1 00:25:22.284 --rc genhtml_function_coverage=1 00:25:22.284 --rc genhtml_legend=1 00:25:22.284 --rc geninfo_all_blocks=1 00:25:22.284 --rc geninfo_unexecuted_blocks=1 00:25:22.284 00:25:22.284 ' 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:22.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.284 --rc genhtml_branch_coverage=1 00:25:22.284 --rc genhtml_function_coverage=1 00:25:22.284 --rc genhtml_legend=1 00:25:22.284 --rc geninfo_all_blocks=1 00:25:22.284 --rc geninfo_unexecuted_blocks=1 00:25:22.284 00:25:22.284 ' 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:22.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.284 --rc genhtml_branch_coverage=1 00:25:22.284 --rc genhtml_function_coverage=1 00:25:22.284 --rc genhtml_legend=1 00:25:22.284 --rc geninfo_all_blocks=1 00:25:22.284 --rc geninfo_unexecuted_blocks=1 00:25:22.284 00:25:22.284 ' 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:22.284 
20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:22.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:25:22.284 20:26:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:24.815 20:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:25:24.815 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:24.815 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:24.815 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:24.815 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:24.815 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:24.816 20:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:24.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:24.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:25:24.816 00:25:24.816 --- 10.0.0.2 ping statistics --- 00:25:24.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.816 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:24.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:24.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:25:24.816 00:25:24.816 --- 10.0.0.1 ping statistics --- 00:25:24.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.816 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=291405 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 291405 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' 
-z 291405 ']' 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:24.816 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:25.076 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:25.076 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:25:25.076 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:25.076 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.076 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:25.076 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.076 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:25.076 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.076 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:25.076 Malloc0 00:25:25.076 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.076 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:25.076 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.076 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:25.076 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.076 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:25.076 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.076 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:25.076 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.076 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:25.076 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.076 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:25.076 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.076 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:25.076 20:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:57.156 Fuzzing completed. 
Shutting down the fuzz application 00:25:57.156 00:25:57.156 Dumping successful admin opcodes: 00:25:57.156 8, 9, 10, 24, 00:25:57.156 Dumping successful io opcodes: 00:25:57.156 0, 9, 00:25:57.156 NS: 0x2000008eff00 I/O qp, Total commands completed: 502418, total successful commands: 2896, random_seed: 564835008 00:25:57.156 NS: 0x2000008eff00 admin qp, Total commands completed: 60352, total successful commands: 478, random_seed: 4196753856 00:25:57.156 20:27:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:57.156 Fuzzing completed. Shutting down the fuzz application 00:25:57.156 00:25:57.156 Dumping successful admin opcodes: 00:25:57.156 24, 00:25:57.156 Dumping successful io opcodes: 00:25:57.156 00:25:57.156 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 555730335 00:25:57.156 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 555855393 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:57.156 20:27:08 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:57.156 rmmod nvme_tcp 00:25:57.156 rmmod nvme_fabrics 00:25:57.156 rmmod nvme_keyring 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 291405 ']' 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 291405 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 291405 ']' 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 291405 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 291405 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 291405' 00:25:57.156 killing process with pid 291405 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 291405 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 291405 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:57.156 20:27:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.062 20:27:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:59.062 20:27:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:59.062 00:25:59.062 real 0m36.958s 00:25:59.062 user 0m50.966s 00:25:59.062 sys 0m14.912s 00:25:59.062 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:59.062 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:59.062 ************************************ 00:25:59.062 END TEST nvmf_fuzz 00:25:59.062 ************************************ 00:25:59.062 20:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:59.062 20:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:59.062 20:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:59.062 20:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:59.062 ************************************ 00:25:59.062 START TEST nvmf_multiconnection 00:25:59.062 ************************************ 00:25:59.062 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:59.321 * Looking for test storage... 
00:25:59.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:59.321 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:59.321 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version 00:25:59.321 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:59.321 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:59.321 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:59.321 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:59.321 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:59.321 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:59.321 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:59.321 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:59.321 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:59.321 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:59.321 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:59.321 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:59.321 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:59.321 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:59.321 20:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:59.321 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:59.321 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:59.321 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:59.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.322 --rc genhtml_branch_coverage=1 00:25:59.322 --rc genhtml_function_coverage=1 00:25:59.322 --rc genhtml_legend=1 00:25:59.322 --rc geninfo_all_blocks=1 00:25:59.322 --rc geninfo_unexecuted_blocks=1 00:25:59.322 00:25:59.322 ' 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:59.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.322 --rc genhtml_branch_coverage=1 00:25:59.322 --rc genhtml_function_coverage=1 00:25:59.322 --rc genhtml_legend=1 00:25:59.322 --rc geninfo_all_blocks=1 00:25:59.322 --rc geninfo_unexecuted_blocks=1 00:25:59.322 00:25:59.322 ' 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:59.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.322 --rc genhtml_branch_coverage=1 00:25:59.322 --rc genhtml_function_coverage=1 00:25:59.322 --rc genhtml_legend=1 00:25:59.322 --rc geninfo_all_blocks=1 00:25:59.322 --rc geninfo_unexecuted_blocks=1 00:25:59.322 00:25:59.322 ' 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:59.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.322 --rc genhtml_branch_coverage=1 00:25:59.322 --rc genhtml_function_coverage=1 00:25:59.322 --rc genhtml_legend=1 00:25:59.322 --rc geninfo_all_blocks=1 00:25:59.322 --rc geninfo_unexecuted_blocks=1 00:25:59.322 00:25:59.322 ' 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@7 -- # uname -s 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:59.322 20:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:59.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:59.322 20:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:01.858 20:27:13 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:01.858 20:27:13 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:01.858 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:01.858 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:01.858 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == 
up ]] 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:01.858 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # 
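The device-discovery loop above (nvmf/common.sh@410–429) walks each NIC's PCI address, globs its kernel net devices out of sysfs, strips the path prefix, and accumulates the interface names. A minimal, self-contained sketch of that step follows; it uses a mock sysfs tree (a stand-in for the real `/sys/bus/pci/devices`) so it runs without the e810 hardware from this run:

```shell
#!/usr/bin/env bash
# Sketch of the pci_net_devs discovery step from nvmf/common.sh.
# The sysfs layout, PCI addresses, and interface names mirror this log run,
# but are recreated under a temp dir so the loop is runnable anywhere.
set -euo pipefail

sysfs=$(mktemp -d)
mkdir -p "$sysfs/devices/0000:0a:00.0/net/cvl_0_0" \
         "$sysfs/devices/0000:0a:00.1/net/cvl_0_1"

pci_devs=("0000:0a:00.0" "0000:0a:00.1")
net_devs=()
for pci in "${pci_devs[@]}"; do
    # Glob the net devices registered under this PCI function...
    pci_net_devs=("$sysfs/devices/$pci/net/"*)
    # ...then strip the leading path, keeping only the interface names.
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$sysfs"
```

The log's `(( 2 == 0 ))` check corresponds to `net_devs` ending up with two entries, cvl_0_0 and cvl_0_1, which later become the target and initiator interfaces.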
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:01.858 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:01.859 20:27:13 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:01.859 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:01.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:01.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:26:01.859 00:26:01.859 --- 10.0.0.2 ping statistics --- 00:26:01.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.859 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:26:01.859 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:01.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:01.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:26:01.859 00:26:01.859 --- 10.0.0.1 ping statistics --- 00:26:01.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.859 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:26:01.859 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:01.859 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:26:01.859 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:01.859 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:01.859 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:01.859 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:01.859 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
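The `nvmf_tcp_init` sequence above can be condensed into a standalone sketch: flush both interfaces, create a namespace for the target side, move the target NIC into it, address both ends, bring the links up, open TCP port 4420, and verify reachability in both directions. Interface names and addresses are taken from this log; the `DRY_RUN` guard is an addition so the script can be previewed without root or the test NICs:

```shell
#!/usr/bin/env bash
# Sketch of nvmf_tcp_init from nvmf/common.sh (names/addresses from this run).
# DRY_RUN=1 (the default here) prints each command instead of executing it.
set -euo pipefail

TARGET_IF=${TARGET_IF:-cvl_0_0}        # moves into the target namespace
INITIATOR_IF=${INITIATOR_IF:-cvl_0_1}  # stays in the root namespace
NETNS="${TARGET_IF}_ns_spdk"
DRY_RUN=${DRY_RUN:-1}
CMDS=""

run() {
    CMDS+="$*"$'\n'
    if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NETNS"
run ip link set "$TARGET_IF" netns "$NETNS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NETNS" ip link set "$TARGET_IF" up
run ip netns exec "$NETNS" ip link set lo up
# Allow NVMe/TCP traffic to the default port before the reachability pings.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NETNS" ping -c 1 10.0.0.1
```

The two pings mirror the log's 10.0.0.2 / 10.0.0.1 checks; `return 0` in the log indicates both succeeded before the target was launched.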
00:26:01.859 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:01.859 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:01.859 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:26:01.859 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:01.859 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:01.859 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:01.859 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=297688 00:26:01.859 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:01.859 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 297688 00:26:01.859 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 297688 ']' 00:26:01.859 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:01.859 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:01.859 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:01.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:01.859 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:01.859 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:01.859 [2024-11-18 20:27:13.652563] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:26:01.859 [2024-11-18 20:27:13.652663] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:01.859 [2024-11-18 20:27:13.726862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:01.859 [2024-11-18 20:27:13.776938] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:01.859 [2024-11-18 20:27:13.777007] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:01.859 [2024-11-18 20:27:13.777021] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:01.859 [2024-11-18 20:27:13.777032] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:01.859 [2024-11-18 20:27:13.777041] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:01.859 [2024-11-18 20:27:13.778662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.859 [2024-11-18 20:27:13.778790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:01.859 [2024-11-18 20:27:13.778815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:01.859 [2024-11-18 20:27:13.778818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.119 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:02.119 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:26:02.119 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:02.119 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:02.119 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.119 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:02.119 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:02.119 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.119 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.119 [2024-11-18 20:27:13.933288] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:02.119 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.119 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:26:02.119 20:27:13 
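The `nvmfappstart` step above launches `nvmf_tgt` inside the target namespace, waits for its RPC socket, then creates the TCP transport. A hedged dry-run sketch (the SPDK root path is the one from this log; `scripts/rpc.py` is SPDK's stock RPC client, and the `waitforlisten` comment stands in for the real polling helper):

```shell
#!/usr/bin/env bash
# Sketch of starting the namespaced nvmf target and creating the TCP transport.
# DRY_RUN=1 (default) only prints the commands; flip to 0 on a real test rig.
set -euo pipefail

SPDK_ROOT=${SPDK_ROOT:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
NETNS=${NETNS:-cvl_0_0_ns_spdk}
DRY_RUN=${DRY_RUN:-1}
CMDS=""

run() {
    CMDS+="$*"$'\n'
    if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

# -i 0: shared-memory instance id; -e 0xFFFF: tracepoint mask; -m 0xF: 4 cores.
run ip netns exec "$NETNS" "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF
# (waitforlisten in the harness polls /var/tmp/spdk.sock until RPCs answer.)
run "$SPDK_ROOT/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
```

The `-o -u 8192` pair matches the log's `NVMF_TRANSPORT_OPTS` plus the multiconnection script's in-capsule data size; the `*** TCP Transport Init ***` notice is the target acknowledging this RPC.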
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:02.119 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:02.119 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.119 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.119 Malloc1 00:26:02.120 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.120 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:26:02.120 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.120 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.120 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.120 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:02.120 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.120 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.120 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.120 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:02.120 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.120 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.120 [2024-11-18 20:27:14.002803] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.120 Malloc2 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.120 Malloc3 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.120 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.379 Malloc4 00:26:02.379 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.379 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:26:02.379 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.379 
20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.379 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.379 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:26:02.379 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.379 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.379 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.379 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:26:02.379 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.379 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.379 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.379 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:02.379 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:26:02.379 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.379 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.379 Malloc5 00:26:02.379 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.379 20:27:14 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:26:02.379 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.379 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.379 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.379 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:26:02.379 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.379 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.379 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.379 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:26:02.379 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.380 Malloc6 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.380 Malloc7 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.380 20:27:14 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.380 Malloc8 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.380 20:27:14 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.380 Malloc9 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.380 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.638 Malloc10 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.638 Malloc11 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:26:02.638 
20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:02.638 20:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
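The xtrace above shows `target/multiconnection.sh` (lines 21-25) iterating over all 11 subsystems before the connect loop starts. A dry-run sketch of that setup loop, with `rpc_cmd` stubbed to print the RPC instead of invoking `scripts/rpc.py` against the running `nvmf_tgt`, looks roughly like this (the stub and the comment wording are illustrative, not the script's actual code):

```shell
# Illustrative stub: print the RPC instead of sending it to the target.
rpc_cmd() { echo "rpc.py $*"; }

NVMF_SUBSYS=11
for i in $(seq 1 "$NVMF_SUBSYS"); do
    # 64 MiB malloc bdev with 512-byte blocks backing each namespace
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
    # subsystem allowing any host (-a) with serial number SPDK$i
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done
```

The serial numbers (`SPDK1` .. `SPDK11`) set here are what the host-side `waitforserial` checks later use to confirm each connection.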
00:26:03.205 20:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:26:03.205 20:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:03.205 20:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:03.205 20:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:03.205 20:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:05.741 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:05.741 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:05.741 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:26:05.741 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:05.741 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:05.741 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:05.741 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:05.741 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:26:05.999 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:26:05.999 20:27:17 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:05.999 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:05.999 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:05.999 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:07.902 20:27:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:07.902 20:27:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:07.902 20:27:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:26:07.902 20:27:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:07.902 20:27:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:07.902 20:27:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:07.902 20:27:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.902 20:27:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:26:08.836 20:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:08.836 20:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:08.836 20:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:08.836 20:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:08.836 20:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:10.743 20:27:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:10.743 20:27:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:10.743 20:27:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:26:10.743 20:27:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:10.744 20:27:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:10.744 20:27:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:10.744 20:27:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:10.744 20:27:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:11.684 20:27:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:11.684 20:27:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:11.684 20:27:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:11.684 
20:27:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:11.684 20:27:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:13.589 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:13.589 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:13.589 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:26:13.589 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:13.589 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:13.589 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:13.589 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:13.589 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:14.528 20:27:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:14.528 20:27:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:14.528 20:27:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:14.528 20:27:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:14.528 20:27:26 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:16.434 20:27:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:16.434 20:27:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:16.434 20:27:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:26:16.434 20:27:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:16.434 20:27:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:16.434 20:27:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:16.434 20:27:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:16.434 20:27:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:17.000 20:27:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:17.000 20:27:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:17.000 20:27:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:17.000 20:27:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:17.000 20:27:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:19.538 20:27:30 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:19.538 20:27:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:19.538 20:27:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:26:19.538 20:27:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:19.538 20:27:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:19.538 20:27:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:19.538 20:27:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:19.538 20:27:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:19.797 20:27:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:19.797 20:27:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:19.797 20:27:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:19.797 20:27:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:19.797 20:27:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:21.698 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:21.698 20:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:21.698 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:26:21.698 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:21.698 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:21.698 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:21.698 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:21.698 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:22.632 20:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:22.632 20:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:22.632 20:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:22.632 20:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:22.632 20:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:24.535 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:24.535 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:24.535 20:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:26:24.535 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:24.535 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:24.535 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:24.535 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:24.535 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:25.473 20:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:25.473 20:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:25.473 20:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:25.473 20:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:25.473 20:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:27.374 20:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:27.374 20:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:27.374 20:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:26:27.633 20:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:27.633 20:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:27.633 20:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:27.633 20:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:27.633 20:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:28.199 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:28.199 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:28.199 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:28.199 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:28.199 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:30.734 20:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:30.734 20:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:30.734 20:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:26:30.734 20:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:30.734 20:27:42 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:30.734 20:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:30.734 20:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.734 20:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:31.304 20:27:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:31.304 20:27:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:31.304 20:27:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:31.304 20:27:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:31.304 20:27:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:33.210 20:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:33.210 20:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:33.210 20:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:26:33.210 20:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:33.210 20:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:33.210 
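The repeated `local i=0` / `lsblk` / `sleep 2` fragments above are the `waitforserial` helper from `common/autotest_common.sh` running once per `nvme connect`. A sketch of its logic, reconstructed from the xtrace (variable names follow the trace; the exact helper body may differ):

```shell
# Poll lsblk until a block device carrying the expected subsystem
# serial (e.g. SPDK1 .. SPDK11) appears, retrying up to 15 times.
waitforserial() {
    local serial=$1
    local i=0 nvme_device_counter=1 nvme_devices=0
    while (( i++ <= 15 )); do
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
        sleep 2
    done
    return 1    # device never showed up within the retry budget
}
```

Note that `grep -c SPDK1` would also match `SPDK10`/`SPDK11`; in this run that is harmless because the subsystems are connected strictly in order, so only the expected device exists at check time.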
20:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:33.210 20:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:33.210 [global] 00:26:33.210 thread=1 00:26:33.210 invalidate=1 00:26:33.210 rw=read 00:26:33.210 time_based=1 00:26:33.210 runtime=10 00:26:33.210 ioengine=libaio 00:26:33.210 direct=1 00:26:33.210 bs=262144 00:26:33.210 iodepth=64 00:26:33.210 norandommap=1 00:26:33.210 numjobs=1 00:26:33.210 00:26:33.210 [job0] 00:26:33.210 filename=/dev/nvme0n1 00:26:33.210 [job1] 00:26:33.210 filename=/dev/nvme10n1 00:26:33.210 [job2] 00:26:33.210 filename=/dev/nvme1n1 00:26:33.210 [job3] 00:26:33.210 filename=/dev/nvme2n1 00:26:33.210 [job4] 00:26:33.210 filename=/dev/nvme3n1 00:26:33.210 [job5] 00:26:33.210 filename=/dev/nvme4n1 00:26:33.210 [job6] 00:26:33.210 filename=/dev/nvme5n1 00:26:33.210 [job7] 00:26:33.210 filename=/dev/nvme6n1 00:26:33.210 [job8] 00:26:33.210 filename=/dev/nvme7n1 00:26:33.210 [job9] 00:26:33.210 filename=/dev/nvme8n1 00:26:33.210 [job10] 00:26:33.210 filename=/dev/nvme9n1 00:26:33.469 Could not set queue depth (nvme0n1) 00:26:33.469 Could not set queue depth (nvme10n1) 00:26:33.469 Could not set queue depth (nvme1n1) 00:26:33.469 Could not set queue depth (nvme2n1) 00:26:33.469 Could not set queue depth (nvme3n1) 00:26:33.469 Could not set queue depth (nvme4n1) 00:26:33.469 Could not set queue depth (nvme5n1) 00:26:33.469 Could not set queue depth (nvme6n1) 00:26:33.469 Could not set queue depth (nvme7n1) 00:26:33.469 Could not set queue depth (nvme8n1) 00:26:33.469 Could not set queue depth (nvme9n1) 00:26:33.469 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.469 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:26:33.469 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.469 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.469 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.469 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.469 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.469 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.469 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.469 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.469 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.469 fio-3.35 00:26:33.469 Starting 11 threads 00:26:45.686 00:26:45.686 job0: (groupid=0, jobs=1): err= 0: pid=302014: Mon Nov 18 20:27:56 2024 00:26:45.686 read: IOPS=184, BW=46.2MiB/s (48.4MB/s)(470MiB/10173msec) 00:26:45.686 slat (usec): min=8, max=230377, avg=3147.56, stdev=15487.00 00:26:45.686 clat (usec): min=1397, max=941294, avg=343074.90, stdev=186316.34 00:26:45.686 lat (usec): min=1428, max=941309, avg=346222.46, stdev=188135.88 00:26:45.686 clat percentiles (msec): 00:26:45.686 | 1.00th=[ 6], 5.00th=[ 26], 10.00th=[ 101], 20.00th=[ 209], 00:26:45.686 | 30.00th=[ 249], 40.00th=[ 288], 50.00th=[ 321], 60.00th=[ 372], 00:26:45.686 | 70.00th=[ 418], 80.00th=[ 468], 90.00th=[ 584], 95.00th=[ 701], 00:26:45.686 | 99.00th=[ 852], 99.50th=[ 877], 99.90th=[ 944], 99.95th=[ 944], 00:26:45.686 | 99.99th=[ 944] 00:26:45.686 bw ( KiB/s): 
min=14336, max=78690, per=6.33%, avg=46456.10, stdev=16740.87, samples=20 00:26:45.686 iops : min= 56, max= 307, avg=181.45, stdev=65.36, samples=20 00:26:45.686 lat (msec) : 2=0.11%, 4=0.43%, 10=1.92%, 20=0.75%, 50=4.20% 00:26:45.686 lat (msec) : 100=2.45%, 250=20.54%, 500=51.94%, 750=14.58%, 1000=3.09% 00:26:45.686 cpu : usr=0.10%, sys=0.54%, ctx=440, majf=0, minf=4097 00:26:45.686 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:26:45.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.686 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:45.686 issued rwts: total=1879,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.686 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:45.686 job1: (groupid=0, jobs=1): err= 0: pid=302015: Mon Nov 18 20:27:56 2024 00:26:45.686 read: IOPS=280, BW=70.2MiB/s (73.6MB/s)(710MiB/10124msec) 00:26:45.686 slat (usec): min=8, max=358832, avg=2390.94, stdev=11909.94 00:26:45.686 clat (msec): min=18, max=675, avg=225.52, stdev=130.79 00:26:45.686 lat (msec): min=18, max=936, avg=227.91, stdev=132.35 00:26:45.686 clat percentiles (msec): 00:26:45.686 | 1.00th=[ 23], 5.00th=[ 51], 10.00th=[ 65], 20.00th=[ 106], 00:26:45.686 | 30.00th=[ 123], 40.00th=[ 188], 50.00th=[ 239], 60.00th=[ 264], 00:26:45.686 | 70.00th=[ 288], 80.00th=[ 305], 90.00th=[ 388], 95.00th=[ 498], 00:26:45.686 | 99.00th=[ 584], 99.50th=[ 625], 99.90th=[ 676], 99.95th=[ 676], 00:26:45.686 | 99.99th=[ 676] 00:26:45.686 bw ( KiB/s): min=34816, max=140569, per=9.69%, avg=71098.00, stdev=28687.41, samples=20 00:26:45.686 iops : min= 136, max= 549, avg=277.70, stdev=112.05, samples=20 00:26:45.686 lat (msec) : 20=0.18%, 50=4.47%, 100=14.15%, 250=36.36%, 500=40.02% 00:26:45.686 lat (msec) : 750=4.82% 00:26:45.686 cpu : usr=0.07%, sys=0.78%, ctx=461, majf=0, minf=4097 00:26:45.686 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:45.686 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:45.686 issued rwts: total=2841,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.686 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:45.686 job2: (groupid=0, jobs=1): err= 0: pid=302016: Mon Nov 18 20:27:56 2024 00:26:45.686 read: IOPS=291, BW=72.8MiB/s (76.4MB/s)(735MiB/10089msec) 00:26:45.686 slat (usec): min=8, max=297905, avg=2702.08, stdev=12659.53 00:26:45.687 clat (msec): min=15, max=687, avg=216.84, stdev=144.52 00:26:45.687 lat (msec): min=15, max=687, avg=219.54, stdev=146.61 00:26:45.687 clat percentiles (msec): 00:26:45.687 | 1.00th=[ 20], 5.00th=[ 27], 10.00th=[ 37], 20.00th=[ 68], 00:26:45.687 | 30.00th=[ 138], 40.00th=[ 169], 50.00th=[ 194], 60.00th=[ 234], 00:26:45.687 | 70.00th=[ 284], 80.00th=[ 338], 90.00th=[ 414], 95.00th=[ 489], 00:26:45.687 | 99.00th=[ 634], 99.50th=[ 651], 99.90th=[ 684], 99.95th=[ 684], 00:26:45.687 | 99.99th=[ 684] 00:26:45.687 bw ( KiB/s): min=20480, max=157696, per=10.03%, avg=73618.25, stdev=38966.52, samples=20 00:26:45.687 iops : min= 80, max= 616, avg=287.55, stdev=152.21, samples=20 00:26:45.687 lat (msec) : 20=1.80%, 50=13.95%, 100=9.70%, 250=38.04%, 500=32.19% 00:26:45.687 lat (msec) : 750=4.32% 00:26:45.687 cpu : usr=0.11%, sys=0.95%, ctx=550, majf=0, minf=4097 00:26:45.687 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:45.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:45.687 issued rwts: total=2939,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.687 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:45.687 job3: (groupid=0, jobs=1): err= 0: pid=302017: Mon Nov 18 20:27:56 2024 00:26:45.687 read: IOPS=160, BW=40.0MiB/s (42.0MB/s)(409MiB/10211msec) 00:26:45.687 slat (usec): min=8, max=345877, 
avg=5679.59, stdev=22536.53 00:26:45.687 clat (msec): min=84, max=1069, avg=393.75, stdev=162.28 00:26:45.687 lat (msec): min=84, max=1069, avg=399.43, stdev=163.83 00:26:45.687 clat percentiles (msec): 00:26:45.687 | 1.00th=[ 111], 5.00th=[ 203], 10.00th=[ 226], 20.00th=[ 253], 00:26:45.687 | 30.00th=[ 296], 40.00th=[ 334], 50.00th=[ 363], 60.00th=[ 397], 00:26:45.687 | 70.00th=[ 439], 80.00th=[ 506], 90.00th=[ 634], 95.00th=[ 726], 00:26:45.687 | 99.00th=[ 961], 99.50th=[ 969], 99.90th=[ 1070], 99.95th=[ 1070], 00:26:45.687 | 99.99th=[ 1070] 00:26:45.687 bw ( KiB/s): min=13312, max=66048, per=5.48%, avg=40214.35, stdev=15502.57, samples=20 00:26:45.687 iops : min= 52, max= 258, avg=157.05, stdev=60.58, samples=20 00:26:45.687 lat (msec) : 100=0.61%, 250=19.02%, 500=59.14%, 750=18.23%, 1000=2.69% 00:26:45.687 lat (msec) : 2000=0.31% 00:26:45.687 cpu : usr=0.09%, sys=0.55%, ctx=207, majf=0, minf=4097 00:26:45.687 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:26:45.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.687 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:45.687 issued rwts: total=1635,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.687 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:45.687 job4: (groupid=0, jobs=1): err= 0: pid=302018: Mon Nov 18 20:27:56 2024 00:26:45.687 read: IOPS=341, BW=85.5MiB/s (89.6MB/s)(869MiB/10168msec) 00:26:45.687 slat (usec): min=12, max=376530, avg=2756.77, stdev=14582.46 00:26:45.687 clat (msec): min=23, max=732, avg=184.25, stdev=177.12 00:26:45.687 lat (msec): min=23, max=815, avg=187.00, stdev=179.61 00:26:45.687 clat percentiles (msec): 00:26:45.687 | 1.00th=[ 31], 5.00th=[ 33], 10.00th=[ 35], 20.00th=[ 39], 00:26:45.687 | 30.00th=[ 43], 40.00th=[ 58], 50.00th=[ 110], 60.00th=[ 153], 00:26:45.687 | 70.00th=[ 247], 80.00th=[ 347], 90.00th=[ 481], 95.00th=[ 558], 00:26:45.687 | 99.00th=[ 659], 99.50th=[ 684], 
99.90th=[ 693], 99.95th=[ 701], 00:26:45.687 | 99.99th=[ 735] 00:26:45.687 bw ( KiB/s): min=19968, max=377344, per=11.90%, avg=87355.25, stdev=88121.44, samples=20 00:26:45.687 iops : min= 78, max= 1474, avg=341.20, stdev=344.21, samples=20 00:26:45.687 lat (msec) : 50=36.70%, 100=11.33%, 250=22.46%, 500=22.17%, 750=7.33% 00:26:45.687 cpu : usr=0.22%, sys=1.04%, ctx=473, majf=0, minf=4097 00:26:45.687 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:45.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:45.687 issued rwts: total=3477,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.687 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:45.687 job5: (groupid=0, jobs=1): err= 0: pid=302019: Mon Nov 18 20:27:56 2024 00:26:45.687 read: IOPS=406, BW=102MiB/s (106MB/s)(1028MiB/10124msec) 00:26:45.687 slat (usec): min=7, max=171881, avg=1812.86, stdev=8962.43 00:26:45.687 clat (usec): min=1661, max=554526, avg=155683.87, stdev=140686.15 00:26:45.687 lat (usec): min=1685, max=566661, avg=157496.72, stdev=142349.03 00:26:45.687 clat percentiles (msec): 00:26:45.687 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 17], 20.00th=[ 37], 00:26:45.687 | 30.00th=[ 47], 40.00th=[ 55], 50.00th=[ 85], 60.00th=[ 165], 00:26:45.687 | 70.00th=[ 257], 80.00th=[ 300], 90.00th=[ 359], 95.00th=[ 422], 00:26:45.687 | 99.00th=[ 502], 99.50th=[ 518], 99.90th=[ 542], 99.95th=[ 550], 00:26:45.687 | 99.99th=[ 558] 00:26:45.687 bw ( KiB/s): min=29636, max=321536, per=14.12%, avg=103625.80, stdev=83794.85, samples=20 00:26:45.687 iops : min= 115, max= 1256, avg=404.75, stdev=327.36, samples=20 00:26:45.687 lat (msec) : 2=0.15%, 4=0.95%, 10=3.16%, 20=7.49%, 50=22.60% 00:26:45.687 lat (msec) : 100=19.80%, 250=15.11%, 500=29.68%, 750=1.07% 00:26:45.687 cpu : usr=0.18%, sys=0.99%, ctx=1076, majf=0, minf=4098 00:26:45.687 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:45.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:45.687 issued rwts: total=4111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.687 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:45.687 job6: (groupid=0, jobs=1): err= 0: pid=302020: Mon Nov 18 20:27:56 2024 00:26:45.687 read: IOPS=170, BW=42.7MiB/s (44.8MB/s)(435MiB/10170msec) 00:26:45.687 slat (usec): min=12, max=241700, avg=5629.09, stdev=22056.91 00:26:45.687 clat (msec): min=11, max=849, avg=368.54, stdev=156.85 00:26:45.687 lat (msec): min=11, max=850, avg=374.17, stdev=158.91 00:26:45.687 clat percentiles (msec): 00:26:45.687 | 1.00th=[ 33], 5.00th=[ 165], 10.00th=[ 192], 20.00th=[ 239], 00:26:45.687 | 30.00th=[ 284], 40.00th=[ 317], 50.00th=[ 347], 60.00th=[ 380], 00:26:45.687 | 70.00th=[ 426], 80.00th=[ 498], 90.00th=[ 592], 95.00th=[ 693], 00:26:45.687 | 99.00th=[ 768], 99.50th=[ 768], 99.90th=[ 852], 99.95th=[ 852], 00:26:45.687 | 99.99th=[ 852] 00:26:45.687 bw ( KiB/s): min=17408, max=91136, per=5.84%, avg=42850.75, stdev=17196.62, samples=20 00:26:45.687 iops : min= 68, max= 356, avg=167.35, stdev=67.19, samples=20 00:26:45.687 lat (msec) : 20=0.12%, 50=1.96%, 100=1.61%, 250=18.01%, 500=59.21% 00:26:45.687 lat (msec) : 750=17.15%, 1000=1.96% 00:26:45.687 cpu : usr=0.06%, sys=0.65%, ctx=217, majf=0, minf=4097 00:26:45.687 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:26:45.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.687 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:45.687 issued rwts: total=1738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.687 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:45.687 job7: (groupid=0, jobs=1): err= 0: pid=302021: Mon Nov 18 20:27:56 2024 00:26:45.687 read: IOPS=208, 
BW=52.1MiB/s (54.7MB/s)(530MiB/10171msec) 00:26:45.687 slat (usec): min=8, max=224006, avg=3008.77, stdev=14485.60 00:26:45.687 clat (usec): min=1776, max=857392, avg=303642.04, stdev=127466.96 00:26:45.687 lat (usec): min=1836, max=857448, avg=306650.80, stdev=128970.31 00:26:45.687 clat percentiles (msec): 00:26:45.687 | 1.00th=[ 47], 5.00th=[ 120], 10.00th=[ 153], 20.00th=[ 190], 00:26:45.687 | 30.00th=[ 241], 40.00th=[ 271], 50.00th=[ 296], 60.00th=[ 317], 00:26:45.687 | 70.00th=[ 355], 80.00th=[ 405], 90.00th=[ 460], 95.00th=[ 535], 00:26:45.687 | 99.00th=[ 651], 99.50th=[ 776], 99.90th=[ 802], 99.95th=[ 802], 00:26:45.687 | 99.99th=[ 860] 00:26:45.687 bw ( KiB/s): min=20480, max=103936, per=7.18%, avg=52678.15, stdev=19377.76, samples=20 00:26:45.687 iops : min= 80, max= 406, avg=205.75, stdev=75.68, samples=20 00:26:45.687 lat (msec) : 2=0.09%, 4=0.14%, 20=0.05%, 50=0.99%, 100=2.45% 00:26:45.687 lat (msec) : 250=30.55%, 500=58.79%, 750=6.18%, 1000=0.75% 00:26:45.687 cpu : usr=0.08%, sys=0.55%, ctx=441, majf=0, minf=4097 00:26:45.687 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:26:45.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:45.687 issued rwts: total=2121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.687 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:45.687 job8: (groupid=0, jobs=1): err= 0: pid=302024: Mon Nov 18 20:27:56 2024 00:26:45.687 read: IOPS=379, BW=94.8MiB/s (99.4MB/s)(956MiB/10090msec) 00:26:45.687 slat (usec): min=8, max=251298, avg=1915.68, stdev=9000.55 00:26:45.687 clat (usec): min=1449, max=700711, avg=166790.68, stdev=109537.33 00:26:45.687 lat (usec): min=1505, max=700729, avg=168706.36, stdev=110843.25 00:26:45.687 clat percentiles (msec): 00:26:45.687 | 1.00th=[ 6], 5.00th=[ 46], 10.00th=[ 53], 20.00th=[ 56], 00:26:45.687 | 30.00th=[ 64], 40.00th=[ 121], 50.00th=[ 
157], 60.00th=[ 180], 00:26:45.687 | 70.00th=[ 228], 80.00th=[ 268], 90.00th=[ 321], 95.00th=[ 372], 00:26:45.687 | 99.00th=[ 422], 99.50th=[ 456], 99.90th=[ 617], 99.95th=[ 701], 00:26:45.687 | 99.99th=[ 701] 00:26:45.687 bw ( KiB/s): min=48640, max=281600, per=13.12%, avg=96301.05, stdev=64006.85, samples=20 00:26:45.688 iops : min= 190, max= 1100, avg=376.15, stdev=250.04, samples=20 00:26:45.688 lat (msec) : 2=0.05%, 4=0.47%, 10=1.36%, 20=1.25%, 50=4.21% 00:26:45.688 lat (msec) : 100=28.71%, 250=39.90%, 500=23.76%, 750=0.29% 00:26:45.688 cpu : usr=0.20%, sys=1.14%, ctx=803, majf=0, minf=3721 00:26:45.688 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:45.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:45.688 issued rwts: total=3825,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.688 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:45.688 job9: (groupid=0, jobs=1): err= 0: pid=302025: Mon Nov 18 20:27:56 2024 00:26:45.688 read: IOPS=260, BW=65.1MiB/s (68.3MB/s)(659MiB/10125msec) 00:26:45.688 slat (usec): min=8, max=367558, avg=3199.92, stdev=14124.30 00:26:45.688 clat (usec): min=738, max=923545, avg=242341.41, stdev=164375.64 00:26:45.688 lat (usec): min=757, max=923567, avg=245541.33, stdev=166433.75 00:26:45.688 clat percentiles (msec): 00:26:45.688 | 1.00th=[ 9], 5.00th=[ 20], 10.00th=[ 56], 20.00th=[ 77], 00:26:45.688 | 30.00th=[ 102], 40.00th=[ 184], 50.00th=[ 253], 60.00th=[ 284], 00:26:45.688 | 70.00th=[ 330], 80.00th=[ 380], 90.00th=[ 430], 95.00th=[ 498], 00:26:45.688 | 99.00th=[ 802], 99.50th=[ 835], 99.90th=[ 919], 99.95th=[ 919], 00:26:45.688 | 99.99th=[ 927] 00:26:45.688 bw ( KiB/s): min=14848, max=160256, per=8.98%, avg=65883.60, stdev=38715.23, samples=20 00:26:45.688 iops : min= 58, max= 626, avg=257.35, stdev=151.22, samples=20 00:26:45.688 lat (usec) : 750=0.04%, 1000=0.11% 
00:26:45.688 lat (msec) : 4=0.15%, 10=1.02%, 20=4.13%, 50=3.45%, 100=20.59% 00:26:45.688 lat (msec) : 250=19.91%, 500=45.77%, 750=3.53%, 1000=1.29% 00:26:45.688 cpu : usr=0.09%, sys=0.95%, ctx=714, majf=0, minf=4097 00:26:45.688 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:45.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:45.688 issued rwts: total=2637,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.688 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:45.688 job10: (groupid=0, jobs=1): err= 0: pid=302028: Mon Nov 18 20:27:56 2024 00:26:45.688 read: IOPS=204, BW=51.0MiB/s (53.5MB/s)(519MiB/10168msec) 00:26:45.688 slat (usec): min=8, max=342327, avg=4655.09, stdev=20949.89 00:26:45.688 clat (msec): min=29, max=773, avg=308.75, stdev=160.00 00:26:45.688 lat (msec): min=29, max=773, avg=313.40, stdev=162.32 00:26:45.688 clat percentiles (msec): 00:26:45.688 | 1.00th=[ 36], 5.00th=[ 58], 10.00th=[ 65], 20.00th=[ 155], 00:26:45.688 | 30.00th=[ 245], 40.00th=[ 279], 50.00th=[ 309], 60.00th=[ 342], 00:26:45.688 | 70.00th=[ 384], 80.00th=[ 439], 90.00th=[ 527], 95.00th=[ 575], 00:26:45.688 | 99.00th=[ 693], 99.50th=[ 709], 99.90th=[ 776], 99.95th=[ 776], 00:26:45.688 | 99.99th=[ 776] 00:26:45.688 bw ( KiB/s): min=20480, max=155136, per=7.01%, avg=51481.30, stdev=30792.36, samples=20 00:26:45.688 iops : min= 80, max= 606, avg=201.05, stdev=120.31, samples=20 00:26:45.688 lat (msec) : 50=3.23%, 100=12.48%, 250=14.55%, 500=56.77%, 750=12.63% 00:26:45.688 lat (msec) : 1000=0.34% 00:26:45.688 cpu : usr=0.05%, sys=0.64%, ctx=234, majf=0, minf=4097 00:26:45.688 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:26:45.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:45.688 
issued rwts: total=2075,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.688 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:45.688 00:26:45.688 Run status group 0 (all jobs): 00:26:45.688 READ: bw=717MiB/s (752MB/s), 40.0MiB/s-102MiB/s (42.0MB/s-106MB/s), io=7320MiB (7675MB), run=10089-10211msec 00:26:45.688 00:26:45.688 Disk stats (read/write): 00:26:45.688 nvme0n1: ios=3606/0, merge=0/0, ticks=1218466/0, in_queue=1218466, util=97.11% 00:26:45.688 nvme10n1: ios=5514/0, merge=0/0, ticks=1230091/0, in_queue=1230091, util=97.32% 00:26:45.688 nvme1n1: ios=5715/0, merge=0/0, ticks=1230530/0, in_queue=1230530, util=97.63% 00:26:45.688 nvme2n1: ios=3269/0, merge=0/0, ticks=1275682/0, in_queue=1275682, util=97.83% 00:26:45.688 nvme3n1: ios=6826/0, merge=0/0, ticks=1209227/0, in_queue=1209227, util=97.85% 00:26:45.688 nvme4n1: ios=8071/0, merge=0/0, ticks=1230951/0, in_queue=1230951, util=98.18% 00:26:45.688 nvme5n1: ios=3321/0, merge=0/0, ticks=1202563/0, in_queue=1202563, util=98.36% 00:26:45.688 nvme6n1: ios=4068/0, merge=0/0, ticks=1231624/0, in_queue=1231624, util=98.48% 00:26:45.688 nvme7n1: ios=7491/0, merge=0/0, ticks=1230486/0, in_queue=1230486, util=98.92% 00:26:45.688 nvme8n1: ios=5118/0, merge=0/0, ticks=1225185/0, in_queue=1225185, util=99.10% 00:26:45.688 nvme9n1: ios=3994/0, merge=0/0, ticks=1199140/0, in_queue=1199140, util=99.23% 00:26:45.688 20:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:45.688 [global] 00:26:45.688 thread=1 00:26:45.688 invalidate=1 00:26:45.688 rw=randwrite 00:26:45.688 time_based=1 00:26:45.688 runtime=10 00:26:45.688 ioengine=libaio 00:26:45.688 direct=1 00:26:45.688 bs=262144 00:26:45.688 iodepth=64 00:26:45.688 norandommap=1 00:26:45.688 numjobs=1 00:26:45.688 00:26:45.688 [job0] 00:26:45.688 filename=/dev/nvme0n1 00:26:45.688 [job1] 00:26:45.688 
filename=/dev/nvme10n1 00:26:45.688 [job2] 00:26:45.688 filename=/dev/nvme1n1 00:26:45.688 [job3] 00:26:45.688 filename=/dev/nvme2n1 00:26:45.688 [job4] 00:26:45.688 filename=/dev/nvme3n1 00:26:45.688 [job5] 00:26:45.688 filename=/dev/nvme4n1 00:26:45.688 [job6] 00:26:45.688 filename=/dev/nvme5n1 00:26:45.688 [job7] 00:26:45.688 filename=/dev/nvme6n1 00:26:45.688 [job8] 00:26:45.688 filename=/dev/nvme7n1 00:26:45.688 [job9] 00:26:45.688 filename=/dev/nvme8n1 00:26:45.688 [job10] 00:26:45.688 filename=/dev/nvme9n1 00:26:45.688 Could not set queue depth (nvme0n1) 00:26:45.688 Could not set queue depth (nvme10n1) 00:26:45.688 Could not set queue depth (nvme1n1) 00:26:45.688 Could not set queue depth (nvme2n1) 00:26:45.688 Could not set queue depth (nvme3n1) 00:26:45.688 Could not set queue depth (nvme4n1) 00:26:45.688 Could not set queue depth (nvme5n1) 00:26:45.688 Could not set queue depth (nvme6n1) 00:26:45.688 Could not set queue depth (nvme7n1) 00:26:45.688 Could not set queue depth (nvme8n1) 00:26:45.688 Could not set queue depth (nvme9n1) 00:26:45.688 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:45.688 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:45.688 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:45.688 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:45.688 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:45.688 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:45.688 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:45.688 job7: (g=0): rw=randwrite, 
bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:45.688 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:45.688 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:45.688 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:45.688 fio-3.35 00:26:45.688 Starting 11 threads 00:26:55.663 00:26:55.663 job0: (groupid=0, jobs=1): err= 0: pid=302756: Mon Nov 18 20:28:07 2024 00:26:55.663 write: IOPS=154, BW=38.6MiB/s (40.4MB/s)(398MiB/10306msec); 0 zone resets 00:26:55.663 slat (usec): min=26, max=47865, avg=5367.13, stdev=11553.49 00:26:55.663 clat (msec): min=14, max=952, avg=409.09, stdev=153.41 00:26:55.663 lat (msec): min=14, max=952, avg=414.46, stdev=155.52 00:26:55.663 clat percentiles (msec): 00:26:55.663 | 1.00th=[ 67], 5.00th=[ 142], 10.00th=[ 192], 20.00th=[ 247], 00:26:55.663 | 30.00th=[ 330], 40.00th=[ 372], 50.00th=[ 426], 60.00th=[ 485], 00:26:55.663 | 70.00th=[ 523], 80.00th=[ 550], 90.00th=[ 584], 95.00th=[ 600], 00:26:55.663 | 99.00th=[ 651], 99.50th=[ 877], 99.90th=[ 953], 99.95th=[ 953], 00:26:55.663 | 99.99th=[ 953] 00:26:55.663 bw ( KiB/s): min=26624, max=80032, per=4.15%, avg=39099.20, stdev=14170.68, samples=20 00:26:55.663 iops : min= 104, max= 312, avg=152.70, stdev=55.26, samples=20 00:26:55.663 lat (msec) : 20=0.06%, 50=0.44%, 100=1.64%, 250=18.68%, 500=42.01% 00:26:55.663 lat (msec) : 750=36.48%, 1000=0.69% 00:26:55.663 cpu : usr=0.60%, sys=0.42%, ctx=562, majf=0, minf=1 00:26:55.663 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.0% 00:26:55.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.663 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:55.663 issued rwts: total=0,1590,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:26:55.663 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:55.663 job1: (groupid=0, jobs=1): err= 0: pid=302771: Mon Nov 18 20:28:07 2024 00:26:55.663 write: IOPS=564, BW=141MiB/s (148MB/s)(1455MiB/10312msec); 0 zone resets 00:26:55.663 slat (usec): min=22, max=203363, avg=1320.51, stdev=5129.97 00:26:55.663 clat (usec): min=1135, max=1032.5k, avg=111959.70, stdev=125822.20 00:26:55.663 lat (usec): min=1164, max=1032.6k, avg=113280.21, stdev=126900.92 00:26:55.663 clat percentiles (msec): 00:26:55.663 | 1.00th=[ 10], 5.00th=[ 34], 10.00th=[ 44], 20.00th=[ 47], 00:26:55.663 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 57], 60.00th=[ 70], 00:26:55.663 | 70.00th=[ 117], 80.00th=[ 161], 90.00th=[ 234], 95.00th=[ 351], 00:26:55.663 | 99.00th=[ 667], 99.50th=[ 684], 99.90th=[ 726], 99.95th=[ 1036], 00:26:55.663 | 99.99th=[ 1036] 00:26:55.663 bw ( KiB/s): min=25600, max=333312, per=15.63%, avg=147404.80, stdev=110780.96, samples=20 00:26:55.663 iops : min= 100, max= 1302, avg=575.80, stdev=432.74, samples=20 00:26:55.663 lat (msec) : 2=0.17%, 4=0.21%, 10=0.67%, 20=1.67%, 50=37.48% 00:26:55.663 lat (msec) : 100=28.17%, 250=22.83%, 500=5.21%, 750=3.52%, 1000=0.02% 00:26:55.663 lat (msec) : 2000=0.05% 00:26:55.663 cpu : usr=1.69%, sys=2.04%, ctx=2179, majf=0, minf=1 00:26:55.663 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:26:55.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.663 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:55.663 issued rwts: total=0,5821,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.663 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:55.663 job2: (groupid=0, jobs=1): err= 0: pid=302772: Mon Nov 18 20:28:07 2024 00:26:55.663 write: IOPS=273, BW=68.4MiB/s (71.7MB/s)(702MiB/10260msec); 0 zone resets 00:26:55.663 slat (usec): min=15, max=146017, avg=2395.58, stdev=9202.97 00:26:55.663 clat (usec): min=639, 
max=979608, avg=231246.55, stdev=212644.40 00:26:55.663 lat (usec): min=679, max=979681, avg=233642.13, stdev=214944.99 00:26:55.663 clat percentiles (msec): 00:26:55.663 | 1.00th=[ 3], 5.00th=[ 15], 10.00th=[ 24], 20.00th=[ 43], 00:26:55.663 | 30.00th=[ 83], 40.00th=[ 106], 50.00th=[ 153], 60.00th=[ 205], 00:26:55.663 | 70.00th=[ 338], 80.00th=[ 414], 90.00th=[ 600], 95.00th=[ 651], 00:26:55.663 | 99.00th=[ 709], 99.50th=[ 852], 99.90th=[ 969], 99.95th=[ 978], 00:26:55.663 | 99.99th=[ 978] 00:26:55.663 bw ( KiB/s): min=22528, max=158720, per=7.45%, avg=70272.00, stdev=47394.22, samples=20 00:26:55.663 iops : min= 88, max= 620, avg=274.50, stdev=185.13, samples=20 00:26:55.663 lat (usec) : 750=0.14%, 1000=0.32% 00:26:55.663 lat (msec) : 2=0.43%, 4=0.96%, 10=1.67%, 20=5.16%, 50=14.67% 00:26:55.663 lat (msec) : 100=12.25%, 250=27.67%, 500=21.12%, 750=14.78%, 1000=0.82% 00:26:55.663 cpu : usr=0.85%, sys=0.76%, ctx=1714, majf=0, minf=1 00:26:55.663 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:55.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.663 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:55.663 issued rwts: total=0,2808,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.663 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:55.663 job3: (groupid=0, jobs=1): err= 0: pid=302773: Mon Nov 18 20:28:07 2024 00:26:55.663 write: IOPS=222, BW=55.7MiB/s (58.4MB/s)(563MiB/10094msec); 0 zone resets 00:26:55.663 slat (usec): min=20, max=116445, avg=3436.20, stdev=9208.19 00:26:55.663 clat (msec): min=9, max=628, avg=283.55, stdev=178.73 00:26:55.663 lat (msec): min=9, max=636, avg=286.99, stdev=181.16 00:26:55.663 clat percentiles (msec): 00:26:55.663 | 1.00th=[ 15], 5.00th=[ 37], 10.00th=[ 66], 20.00th=[ 136], 00:26:55.663 | 30.00th=[ 155], 40.00th=[ 178], 50.00th=[ 228], 60.00th=[ 305], 00:26:55.663 | 70.00th=[ 418], 80.00th=[ 498], 90.00th=[ 550], 95.00th=[ 575], 
00:26:55.663 | 99.00th=[ 600], 99.50th=[ 617], 99.90th=[ 617], 99.95th=[ 625], 00:26:55.663 | 99.99th=[ 625] 00:26:55.663 bw ( KiB/s): min=25088, max=132608, per=5.94%, avg=55993.75, stdev=27847.43, samples=20 00:26:55.663 iops : min= 98, max= 518, avg=218.70, stdev=108.77, samples=20 00:26:55.663 lat (msec) : 10=0.04%, 20=2.84%, 50=4.53%, 100=6.58%, 250=38.84% 00:26:55.663 lat (msec) : 500=27.60%, 750=19.56% 00:26:55.663 cpu : usr=0.63%, sys=0.76%, ctx=1102, majf=0, minf=1 00:26:55.663 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:26:55.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.663 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:55.663 issued rwts: total=0,2250,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.663 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:55.663 job4: (groupid=0, jobs=1): err= 0: pid=302774: Mon Nov 18 20:28:07 2024 00:26:55.663 write: IOPS=384, BW=96.0MiB/s (101MB/s)(993MiB/10336msec); 0 zone resets 00:26:55.663 slat (usec): min=14, max=52815, avg=1622.21, stdev=6222.43 00:26:55.663 clat (usec): min=642, max=1001.4k, avg=164877.75, stdev=180861.02 00:26:55.663 lat (usec): min=677, max=1001.4k, avg=166499.96, stdev=182831.21 00:26:55.663 clat percentiles (usec): 00:26:55.663 | 1.00th=[ 955], 5.00th=[ 3359], 10.00th=[ 8979], 00:26:55.663 | 20.00th=[ 42206], 30.00th=[ 61604], 40.00th=[ 65274], 00:26:55.663 | 50.00th=[ 77071], 60.00th=[ 121111], 70.00th=[ 183501], 00:26:55.663 | 80.00th=[ 267387], 90.00th=[ 505414], 95.00th=[ 566232], 00:26:55.663 | 99.00th=[ 624952], 99.50th=[ 809501], 99.90th=[ 968885], 00:26:55.663 | 99.95th=[ 968885], 99.99th=[1002439] 00:26:55.663 bw ( KiB/s): min=28672, max=332288, per=10.60%, avg=100004.90, stdev=86634.45, samples=20 00:26:55.663 iops : min= 112, max= 1298, avg=390.60, stdev=338.41, samples=20 00:26:55.663 lat (usec) : 750=0.33%, 1000=0.76% 00:26:55.663 lat (msec) : 2=1.86%, 4=2.82%, 
10=5.04%, 20=3.40%, 50=9.75% 00:26:55.663 lat (msec) : 100=34.56%, 250=20.18%, 500=10.76%, 750=9.92%, 1000=0.60% 00:26:55.663 lat (msec) : 2000=0.03% 00:26:55.663 cpu : usr=1.18%, sys=1.12%, ctx=2397, majf=0, minf=1 00:26:55.663 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:55.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.663 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:55.663 issued rwts: total=0,3970,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.663 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:55.663 job5: (groupid=0, jobs=1): err= 0: pid=302775: Mon Nov 18 20:28:07 2024 00:26:55.663 write: IOPS=249, BW=62.3MiB/s (65.3MB/s)(644MiB/10330msec); 0 zone resets 00:26:55.663 slat (usec): min=18, max=83516, avg=3368.65, stdev=9212.97 00:26:55.663 clat (msec): min=7, max=993, avg=253.32, stdev=208.44 00:26:55.663 lat (msec): min=7, max=993, avg=256.69, stdev=211.32 00:26:55.663 clat percentiles (msec): 00:26:55.663 | 1.00th=[ 26], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 70], 00:26:55.663 | 30.00th=[ 91], 40.00th=[ 111], 50.00th=[ 136], 60.00th=[ 234], 00:26:55.663 | 70.00th=[ 355], 80.00th=[ 514], 90.00th=[ 567], 95.00th=[ 592], 00:26:55.663 | 99.00th=[ 751], 99.50th=[ 877], 99.90th=[ 953], 99.95th=[ 995], 00:26:55.663 | 99.99th=[ 995] 00:26:55.663 bw ( KiB/s): min=22528, max=223232, per=6.81%, avg=64253.10, stdev=54738.37, samples=20 00:26:55.663 iops : min= 88, max= 872, avg=250.95, stdev=213.85, samples=20 00:26:55.663 lat (msec) : 10=0.16%, 20=0.51%, 50=2.29%, 100=31.04%, 250=28.75% 00:26:55.663 lat (msec) : 500=14.84%, 750=21.41%, 1000=1.01% 00:26:55.663 cpu : usr=0.74%, sys=0.84%, ctx=1015, majf=0, minf=1 00:26:55.663 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:55.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.663 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.1%, >=64=0.0% 00:26:55.663 issued rwts: total=0,2574,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.663 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:55.663 job6: (groupid=0, jobs=1): err= 0: pid=302776: Mon Nov 18 20:28:07 2024 00:26:55.663 write: IOPS=184, BW=46.1MiB/s (48.4MB/s)(477MiB/10333msec); 0 zone resets 00:26:55.664 slat (usec): min=14, max=109292, avg=4311.13, stdev=11085.27 00:26:55.664 clat (usec): min=1000, max=1007.9k, avg=342405.22, stdev=208646.07 00:26:55.664 lat (usec): min=1066, max=1008.0k, avg=346716.36, stdev=211642.06 00:26:55.664 clat percentiles (usec): 00:26:55.664 | 1.00th=[ 1844], 5.00th=[ 3654], 10.00th=[ 21627], 00:26:55.664 | 20.00th=[ 98042], 30.00th=[ 227541], 40.00th=[ 299893], 00:26:55.664 | 50.00th=[ 350225], 60.00th=[ 417334], 70.00th=[ 501220], 00:26:55.664 | 80.00th=[ 541066], 90.00th=[ 583009], 95.00th=[ 599786], 00:26:55.664 | 99.00th=[ 809501], 99.50th=[ 935330], 99.90th=[1010828], 00:26:55.664 | 99.95th=[1010828], 99.99th=[1010828] 00:26:55.664 bw ( KiB/s): min=23040, max=155136, per=5.00%, avg=47185.75, stdev=31453.74, samples=20 00:26:55.664 iops : min= 90, max= 606, avg=184.30, stdev=122.86, samples=20 00:26:55.664 lat (msec) : 2=1.05%, 4=4.88%, 10=0.84%, 20=2.62%, 50=6.77% 00:26:55.664 lat (msec) : 100=4.04%, 250=12.28%, 500=37.04%, 750=29.12%, 1000=1.26% 00:26:55.664 lat (msec) : 2000=0.10% 00:26:55.664 cpu : usr=0.57%, sys=0.63%, ctx=984, majf=0, minf=2 00:26:55.664 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:26:55.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.664 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:55.664 issued rwts: total=0,1906,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.664 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:55.664 job7: (groupid=0, jobs=1): err= 0: pid=302777: Mon Nov 18 20:28:07 2024 00:26:55.664 write: IOPS=441, BW=110MiB/s 
(116MB/s)(1141MiB/10331msec); 0 zone resets 00:26:55.664 slat (usec): min=15, max=97726, avg=1743.59, stdev=6118.04 00:26:55.664 clat (usec): min=1002, max=1016.6k, avg=142990.59, stdev=166409.70 00:26:55.664 lat (usec): min=1688, max=1053.0k, avg=144734.18, stdev=168547.36 00:26:55.664 clat percentiles (msec): 00:26:55.664 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 15], 20.00th=[ 30], 00:26:55.664 | 30.00th=[ 55], 40.00th=[ 59], 50.00th=[ 64], 60.00th=[ 99], 00:26:55.664 | 70.00th=[ 132], 80.00th=[ 228], 90.00th=[ 409], 95.00th=[ 523], 00:26:55.664 | 99.00th=[ 667], 99.50th=[ 751], 99.90th=[ 978], 99.95th=[ 1020], 00:26:55.664 | 99.99th=[ 1020] 00:26:55.664 bw ( KiB/s): min=28672, max=304128, per=12.22%, avg=115222.85, stdev=92876.75, samples=20 00:26:55.664 iops : min= 112, max= 1188, avg=450.05, stdev=362.73, samples=20 00:26:55.664 lat (msec) : 2=0.11%, 4=1.03%, 10=4.45%, 20=9.27%, 50=12.29% 00:26:55.664 lat (msec) : 100=33.14%, 250=21.27%, 500=12.42%, 750=5.48%, 1000=0.46% 00:26:55.664 lat (msec) : 2000=0.09% 00:26:55.664 cpu : usr=1.29%, sys=1.40%, ctx=2479, majf=0, minf=1 00:26:55.664 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:55.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:55.664 issued rwts: total=0,4565,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.664 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:55.664 job8: (groupid=0, jobs=1): err= 0: pid=302778: Mon Nov 18 20:28:07 2024 00:26:55.664 write: IOPS=471, BW=118MiB/s (123MB/s)(1188MiB/10089msec); 0 zone resets 00:26:55.664 slat (usec): min=16, max=151558, avg=1539.11, stdev=5530.45 00:26:55.664 clat (usec): min=801, max=602718, avg=134285.35, stdev=94589.09 00:26:55.664 lat (usec): min=885, max=611654, avg=135824.46, stdev=95489.87 00:26:55.664 clat percentiles (msec): 00:26:55.664 | 1.00th=[ 7], 5.00th=[ 28], 10.00th=[ 54], 20.00th=[ 
61], 00:26:55.664 | 30.00th=[ 75], 40.00th=[ 102], 50.00th=[ 123], 60.00th=[ 134], 00:26:55.664 | 70.00th=[ 150], 80.00th=[ 180], 90.00th=[ 243], 95.00th=[ 326], 00:26:55.664 | 99.00th=[ 531], 99.50th=[ 575], 99.90th=[ 600], 99.95th=[ 600], 00:26:55.664 | 99.99th=[ 600] 00:26:55.664 bw ( KiB/s): min=32768, max=236032, per=12.73%, avg=120024.30, stdev=53533.99, samples=20 00:26:55.664 iops : min= 128, max= 922, avg=468.80, stdev=209.12, samples=20 00:26:55.664 lat (usec) : 1000=0.04% 00:26:55.664 lat (msec) : 2=0.19%, 4=0.38%, 10=1.01%, 20=1.39%, 50=6.17% 00:26:55.664 lat (msec) : 100=30.30%, 250=51.56%, 500=7.66%, 750=1.30% 00:26:55.664 cpu : usr=1.40%, sys=1.42%, ctx=2200, majf=0, minf=1 00:26:55.664 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:26:55.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:55.664 issued rwts: total=0,4752,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.664 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:55.664 job9: (groupid=0, jobs=1): err= 0: pid=302779: Mon Nov 18 20:28:07 2024 00:26:55.664 write: IOPS=377, BW=94.3MiB/s (98.8MB/s)(975MiB/10338msec); 0 zone resets 00:26:55.664 slat (usec): min=21, max=77877, avg=2157.94, stdev=7000.29 00:26:55.664 clat (msec): min=2, max=990, avg=167.44, stdev=185.01 00:26:55.664 lat (msec): min=2, max=990, avg=169.60, stdev=187.43 00:26:55.664 clat percentiles (msec): 00:26:55.664 | 1.00th=[ 39], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 46], 00:26:55.664 | 30.00th=[ 47], 40.00th=[ 48], 50.00th=[ 67], 60.00th=[ 103], 00:26:55.664 | 70.00th=[ 133], 80.00th=[ 342], 90.00th=[ 510], 95.00th=[ 575], 00:26:55.664 | 99.00th=[ 659], 99.50th=[ 793], 99.90th=[ 953], 99.95th=[ 995], 00:26:55.664 | 99.99th=[ 995] 00:26:55.664 bw ( KiB/s): min=22528, max=346112, per=10.41%, avg=98157.35, stdev=104484.72, samples=20 00:26:55.664 iops : min= 88, max= 1352, 
avg=383.40, stdev=408.15, samples=20 00:26:55.664 lat (msec) : 4=0.08%, 10=0.10%, 20=0.15%, 50=43.61%, 100=14.03% 00:26:55.664 lat (msec) : 250=17.80%, 500=13.60%, 750=10.06%, 1000=0.56% 00:26:55.664 cpu : usr=1.37%, sys=1.06%, ctx=1337, majf=0, minf=1 00:26:55.664 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:55.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:55.664 issued rwts: total=0,3898,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.664 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:55.664 job10: (groupid=0, jobs=1): err= 0: pid=302780: Mon Nov 18 20:28:07 2024 00:26:55.664 write: IOPS=382, BW=95.6MiB/s (100MB/s)(987MiB/10327msec); 0 zone resets 00:26:55.664 slat (usec): min=17, max=113925, avg=1511.67, stdev=6685.08 00:26:55.664 clat (usec): min=880, max=674354, avg=165773.93, stdev=189472.30 00:26:55.664 lat (usec): min=937, max=674417, avg=167285.60, stdev=191325.36 00:26:55.664 clat percentiles (msec): 00:26:55.664 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 11], 20.00th=[ 20], 00:26:55.664 | 30.00th=[ 30], 40.00th=[ 47], 50.00th=[ 61], 60.00th=[ 112], 00:26:55.664 | 70.00th=[ 222], 80.00th=[ 338], 90.00th=[ 502], 95.00th=[ 575], 00:26:55.664 | 99.00th=[ 659], 99.50th=[ 667], 99.90th=[ 676], 99.95th=[ 676], 00:26:55.664 | 99.99th=[ 676] 00:26:55.664 bw ( KiB/s): min=26624, max=290304, per=10.55%, avg=99452.95, stdev=84912.26, samples=20 00:26:55.664 iops : min= 104, max= 1134, avg=388.45, stdev=331.72, samples=20 00:26:55.664 lat (usec) : 1000=0.08% 00:26:55.664 lat (msec) : 2=0.68%, 4=3.24%, 10=5.83%, 20=10.64%, 50=22.80% 00:26:55.664 lat (msec) : 100=14.61%, 250=15.65%, 500=16.46%, 750=10.01% 00:26:55.664 cpu : usr=1.02%, sys=1.41%, ctx=2912, majf=0, minf=1 00:26:55.664 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:55.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:55.664 issued rwts: total=0,3948,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.664 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:55.664 00:26:55.664 Run status group 0 (all jobs): 00:26:55.664 WRITE: bw=921MiB/s (966MB/s), 38.6MiB/s-141MiB/s (40.4MB/s-148MB/s), io=9521MiB (9983MB), run=10089-10338msec 00:26:55.664 00:26:55.664 Disk stats (read/write): 00:26:55.664 nvme0n1: ios=43/3122, merge=0/0, ticks=868/1230486, in_queue=1231354, util=100.00% 00:26:55.664 nvme10n1: ios=39/11580, merge=0/0, ticks=2026/1214844, in_queue=1216870, util=100.00% 00:26:55.664 nvme1n1: ios=43/5583, merge=0/0, ticks=2434/1225907, in_queue=1228341, util=100.00% 00:26:55.664 nvme2n1: ios=15/4315, merge=0/0, ticks=97/1214324, in_queue=1214421, util=97.91% 00:26:55.664 nvme3n1: ios=15/7861, merge=0/0, ticks=97/1230971, in_queue=1231068, util=97.99% 00:26:55.664 nvme4n1: ios=0/5068, merge=0/0, ticks=0/1216170, in_queue=1216170, util=98.20% 00:26:55.664 nvme5n1: ios=0/3736, merge=0/0, ticks=0/1219582, in_queue=1219582, util=98.41% 00:26:55.664 nvme6n1: ios=45/9049, merge=0/0, ticks=214/1220469, in_queue=1220683, util=99.93% 00:26:55.664 nvme7n1: ios=46/9309, merge=0/0, ticks=3398/1198785, in_queue=1202183, util=100.00% 00:26:55.664 nvme8n1: ios=43/7714, merge=0/0, ticks=113/1218241, in_queue=1218354, util=99.97% 00:26:55.664 nvme9n1: ios=24/7820, merge=0/0, ticks=761/1236539, in_queue=1237300, util=100.00% 00:26:55.664 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:55.664 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:55.664 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:55.664 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:55.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:55.664 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:55.664 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:55.664 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:55.664 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:55.664 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:55.664 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:55.664 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:55.664 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:55.664 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.664 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:55.664 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.664 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:55.664 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:55.664 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:55.665 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:55.665 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:55.665 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:55.665 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:55.665 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:55.665 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:55.665 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:55.665 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:55.665 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.665 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:55.924 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.924 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:55.924 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:56.183 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:56.183 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:56.183 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:56.183 20:28:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:56.183 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:56.183 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:56.183 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:56.183 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:56.183 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:56.183 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.183 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:56.183 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.183 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:56.183 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:56.441 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:56.441 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:56.441 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:56.441 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:56.441 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep 
-q -w SPDK4 00:26:56.441 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:56.441 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:56.441 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:56.441 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:56.441 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.441 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:56.441 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.441 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:56.441 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:56.699 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:56.700 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:56.700 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:56.700 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:56.700 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:56.700 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:56.700 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:56.700 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:56.700 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:56.700 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.700 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:56.700 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.700 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:56.700 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:56.959 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:56.959 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:56.959 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:56.959 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:56.959 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:56.959 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:56.959 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:56.959 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:56.959 20:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:56.959 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.959 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:56.959 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.959 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:56.959 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:56.959 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:56.959 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:56.959 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:56.959 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:56.959 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:57.217 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:57.217 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:57.217 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:57.217 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:57.217 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.217 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:57.217 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.217 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:57.217 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:57.217 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:57.217 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:57.217 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:57.217 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:57.217 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:57.217 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:57.217 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:57.217 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:57.217 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:57.217 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.217 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:57.217 20:28:09 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.217 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:57.217 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:57.477 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:57.477 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:57.477 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:57.477 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:57.477 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:57.477 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:57.477 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:57.477 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:57.477 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:57.477 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.477 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:57.477 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.477 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:26:57.477 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:57.736 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:57.736 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:57.736 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:57.736 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:57.736 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:57.736 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:57.736 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:57.736 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:57.736 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:57.736 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.736 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:57.736 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.736 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:57.736 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:57.736 NQN:nqn.2016-06.io.spdk:cnode11 
disconnected 1 controller(s) 00:26:57.736 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:57.736 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:57.736 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:57.736 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:57.736 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:57.736 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:57.737 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:57.737 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:57.737 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.737 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:57.737 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.737 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:57.737 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:57.737 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:57.737 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:57.737 20:28:09 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:57.737 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:57.737 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:57.737 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:57.737 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:57.737 rmmod nvme_tcp 00:26:57.737 rmmod nvme_fabrics 00:26:57.737 rmmod nvme_keyring 00:26:57.737 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:57.737 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:57.737 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:57.737 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 297688 ']' 00:26:57.737 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 297688 00:26:57.737 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 297688 ']' 00:26:57.737 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 297688 00:26:57.737 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:57.737 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:57.737 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 297688 00:26:57.737 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:57.737 20:28:09 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:57.737 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 297688' 00:26:57.737 killing process with pid 297688 00:26:57.737 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 297688 00:26:57.737 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 297688 00:26:58.304 20:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:58.304 20:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:58.304 20:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:58.304 20:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:58.304 20:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:26:58.304 20:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:58.304 20:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:26:58.304 20:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:58.304 20:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:58.304 20:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.304 20:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:58.304 20:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:00.840 00:27:00.840 real 1m1.212s 00:27:00.840 user 3m34.725s 00:27:00.840 sys 0m15.980s 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.840 ************************************ 00:27:00.840 END TEST nvmf_multiconnection 00:27:00.840 ************************************ 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:00.840 ************************************ 00:27:00.840 START TEST nvmf_initiator_timeout 00:27:00.840 ************************************ 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:00.840 * Looking for test storage... 
00:27:00.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 
00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:00.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.840 --rc genhtml_branch_coverage=1 00:27:00.840 --rc genhtml_function_coverage=1 00:27:00.840 --rc genhtml_legend=1 00:27:00.840 --rc geninfo_all_blocks=1 00:27:00.840 --rc geninfo_unexecuted_blocks=1 00:27:00.840 00:27:00.840 ' 00:27:00.840 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:00.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.840 --rc genhtml_branch_coverage=1 00:27:00.840 --rc genhtml_function_coverage=1 00:27:00.840 --rc genhtml_legend=1 00:27:00.840 --rc geninfo_all_blocks=1 00:27:00.840 --rc geninfo_unexecuted_blocks=1 00:27:00.840 00:27:00.840 ' 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:00.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.841 --rc genhtml_branch_coverage=1 00:27:00.841 --rc genhtml_function_coverage=1 00:27:00.841 --rc genhtml_legend=1 00:27:00.841 --rc geninfo_all_blocks=1 00:27:00.841 --rc geninfo_unexecuted_blocks=1 00:27:00.841 00:27:00.841 ' 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:00.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.841 --rc genhtml_branch_coverage=1 00:27:00.841 --rc genhtml_function_coverage=1 00:27:00.841 --rc genhtml_legend=1 00:27:00.841 --rc geninfo_all_blocks=1 00:27:00.841 --rc geninfo_unexecuted_blocks=1 00:27:00.841 00:27:00.841 ' 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:00.841 
20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:00.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:27:00.841 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:02.744 20:28:14 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:02.744 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:02.744 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:02.744 20:28:14 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:02.744 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.744 20:28:14 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:02.744 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:02.744 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:02.745 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:02.745 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:02.745 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:02.745 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:02.745 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:02.745 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:27:02.745 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:02.745 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:02.745 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:02.745 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:02.745 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:02.745 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:02.745 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:02.745 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:02.745 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:02.745 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:02.745 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:02.745 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:02.745 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:03.003 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:03.003 20:28:14 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:03.003 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:03.003 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:03.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:03.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:27:03.003 00:27:03.003 --- 10.0.0.2 ping statistics --- 00:27:03.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.003 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:27:03.003 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:03.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:03.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:27:03.003 00:27:03.003 --- 10.0.0.1 ping statistics --- 00:27:03.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.003 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:27:03.003 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:03.003 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:27:03.003 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:03.003 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:03.003 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:03.003 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:03.003 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:03.003 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:03.003 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:03.003 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:03.003 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:03.003 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:03.003 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:03.003 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=305970 
00:27:03.003 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 305970 00:27:03.003 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:03.003 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 305970 ']' 00:27:03.003 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.003 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:03.003 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:03.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:03.003 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:03.003 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:03.003 [2024-11-18 20:28:14.847727] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:27:03.003 [2024-11-18 20:28:14.847820] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:03.003 [2024-11-18 20:28:14.922235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:03.003 [2024-11-18 20:28:14.968241] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:03.003 [2024-11-18 20:28:14.968296] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:03.003 [2024-11-18 20:28:14.968324] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:03.003 [2024-11-18 20:28:14.968335] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:03.003 [2024-11-18 20:28:14.968354] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:03.003 [2024-11-18 20:28:14.970030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:03.003 [2024-11-18 20:28:14.970051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:03.003 [2024-11-18 20:28:14.970106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:03.003 [2024-11-18 20:28:14.970109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:03.262 
20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:03.262 Malloc0 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:03.262 Delay0 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:03.262 [2024-11-18 20:28:15.165249] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:03.262 [2024-11-18 20:28:15.193528] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.262 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:04.272 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:04.272 
20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:27:04.272 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:04.272 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:04.272 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:27:06.269 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:06.269 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:06.269 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:27:06.269 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:06.269 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:06.269 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:27:06.269 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=306406 00:27:06.270 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:27:06.270 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:06.270 [global] 00:27:06.270 thread=1 00:27:06.270 invalidate=1 00:27:06.270 rw=write 00:27:06.270 time_based=1 00:27:06.270 runtime=60 00:27:06.270 ioengine=libaio 00:27:06.270 direct=1 00:27:06.270 bs=4096 00:27:06.270 
iodepth=1 00:27:06.270 norandommap=0 00:27:06.270 numjobs=1 00:27:06.270 00:27:06.270 verify_dump=1 00:27:06.270 verify_backlog=512 00:27:06.270 verify_state_save=0 00:27:06.270 do_verify=1 00:27:06.270 verify=crc32c-intel 00:27:06.270 [job0] 00:27:06.270 filename=/dev/nvme0n1 00:27:06.270 Could not set queue depth (nvme0n1) 00:27:06.270 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:06.270 fio-3.35 00:27:06.270 Starting 1 thread 00:27:09.560 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:09.560 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.560 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:09.560 true 00:27:09.560 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.560 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:09.560 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.560 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:09.560 true 00:27:09.560 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.560 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:09.560 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.560 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:27:09.560 true 00:27:09.560 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.560 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:09.560 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.560 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:09.560 true 00:27:09.560 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.560 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:12.093 20:28:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:12.093 20:28:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.093 20:28:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:12.093 true 00:27:12.093 20:28:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.093 20:28:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:12.093 20:28:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.093 20:28:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:12.093 true 00:27:12.093 20:28:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.093 20:28:23 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:12.093 20:28:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.093 20:28:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:12.093 true 00:27:12.093 20:28:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.093 20:28:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:12.093 20:28:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.093 20:28:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:12.093 true 00:27:12.093 20:28:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.093 20:28:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:12.093 20:28:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 306406 00:28:08.341 00:28:08.341 job0: (groupid=0, jobs=1): err= 0: pid=306478: Mon Nov 18 20:29:18 2024 00:28:08.341 read: IOPS=7, BW=30.7KiB/s (31.4kB/s)(1840KiB/60020msec) 00:28:08.341 slat (usec): min=6, max=6865, avg=37.64, stdev=319.17 00:28:08.341 clat (usec): min=324, max=41241k, avg=130124.66, stdev=1920969.18 00:28:08.341 lat (usec): min=342, max=41241k, avg=130162.30, stdev=1920968.32 00:28:08.341 clat percentiles (usec): 00:28:08.341 | 1.00th=[ 383], 5.00th=[ 40633], 10.00th=[ 41157], 00:28:08.341 | 20.00th=[ 41157], 30.00th=[ 41157], 40.00th=[ 41157], 00:28:08.341 | 50.00th=[ 41157], 60.00th=[ 41157], 70.00th=[ 41681], 00:28:08.341 | 80.00th=[ 42206], 90.00th=[ 
42206], 95.00th=[ 42206], 00:28:08.341 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[17112761], 00:28:08.341 | 99.95th=[17112761], 99.99th=[17112761] 00:28:08.341 write: IOPS=8, BW=34.1KiB/s (34.9kB/s)(2048KiB/60020msec); 0 zone resets 00:28:08.341 slat (usec): min=7, max=26862, avg=65.82, stdev=1186.60 00:28:08.341 clat (usec): min=146, max=648, avg=204.88, stdev=28.47 00:28:08.341 lat (usec): min=178, max=27130, avg=270.70, stdev=1189.74 00:28:08.341 clat percentiles (usec): 00:28:08.341 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 190], 00:28:08.341 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 200], 60.00th=[ 206], 00:28:08.341 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 225], 95.00th=[ 231], 00:28:08.341 | 99.00th=[ 281], 99.50th=[ 375], 99.90th=[ 652], 99.95th=[ 652], 00:28:08.341 | 99.99th=[ 652] 00:28:08.341 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:28:08.341 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:28:08.341 lat (usec) : 250=51.54%, 500=1.85%, 750=0.10% 00:28:08.341 lat (msec) : 50=46.40%, >=2000=0.10% 00:28:08.341 cpu : usr=0.02%, sys=0.03%, ctx=978, majf=0, minf=1 00:28:08.341 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:08.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.341 issued rwts: total=460,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.341 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:08.341 00:28:08.341 Run status group 0 (all jobs): 00:28:08.342 READ: bw=30.7KiB/s (31.4kB/s), 30.7KiB/s-30.7KiB/s (31.4kB/s-31.4kB/s), io=1840KiB (1884kB), run=60020-60020msec 00:28:08.342 WRITE: bw=34.1KiB/s (34.9kB/s), 34.1KiB/s-34.1KiB/s (34.9kB/s-34.9kB/s), io=2048KiB (2097kB), run=60020-60020msec 00:28:08.342 00:28:08.342 Disk stats (read/write): 00:28:08.342 nvme0n1: ios=509/512, merge=0/0, 
ticks=20020/103, in_queue=20123, util=99.90% 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:08.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:08.342 nvmf hotplug test: fio successful as expected 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:08.342 
20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:08.342 rmmod nvme_tcp 00:28:08.342 rmmod nvme_fabrics 00:28:08.342 rmmod nvme_keyring 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 305970 ']' 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 305970 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 305970 ']' 
00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 305970 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 305970 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 305970' 00:28:08.342 killing process with pid 305970 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 305970 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 305970 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:08.342 20:29:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.910 20:29:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:08.910 00:28:08.910 real 1m8.465s 00:28:08.910 user 4m11.319s 00:28:08.910 sys 0m6.528s 00:28:08.910 20:29:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:08.910 20:29:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:08.910 ************************************ 00:28:08.910 END TEST nvmf_initiator_timeout 00:28:08.910 ************************************ 00:28:08.910 20:29:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:28:08.910 20:29:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:28:08.910 20:29:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:28:08.910 20:29:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:28:08.910 20:29:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@315 -- # pci_devs=() 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 
-- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:11.449 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:11.449 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:11.449 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:11.449 20:29:22 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:11.449 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:11.449 ************************************ 00:28:11.449 START 
TEST nvmf_perf_adq 00:28:11.449 ************************************ 00:28:11.449 20:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:11.449 * Looking for test storage... 00:28:11.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:11.449 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:11.449 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:28:11.449 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:11.449 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:11.449 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:11.449 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:11.449 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:11.449 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:28:11.449 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:28:11.449 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:28:11.449 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:28:11.449 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:28:11.449 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:28:11.449 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:28:11.449 20:29:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:11.449 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:28:11.449 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:28:11.449 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:11.449 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:11.449 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:28:11.449 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:28:11.449 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:11.449 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:28:11.449 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:28:11.449 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:11.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.450 --rc genhtml_branch_coverage=1 00:28:11.450 --rc genhtml_function_coverage=1 00:28:11.450 --rc genhtml_legend=1 00:28:11.450 --rc geninfo_all_blocks=1 00:28:11.450 --rc geninfo_unexecuted_blocks=1 00:28:11.450 00:28:11.450 ' 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:11.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.450 --rc genhtml_branch_coverage=1 00:28:11.450 --rc genhtml_function_coverage=1 00:28:11.450 --rc genhtml_legend=1 00:28:11.450 --rc geninfo_all_blocks=1 00:28:11.450 --rc geninfo_unexecuted_blocks=1 00:28:11.450 00:28:11.450 ' 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:11.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.450 --rc genhtml_branch_coverage=1 00:28:11.450 --rc genhtml_function_coverage=1 00:28:11.450 --rc genhtml_legend=1 00:28:11.450 --rc geninfo_all_blocks=1 00:28:11.450 --rc geninfo_unexecuted_blocks=1 00:28:11.450 00:28:11.450 ' 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:11.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.450 --rc genhtml_branch_coverage=1 00:28:11.450 --rc genhtml_function_coverage=1 00:28:11.450 --rc genhtml_legend=1 00:28:11.450 --rc geninfo_all_blocks=1 00:28:11.450 --rc geninfo_unexecuted_blocks=1 00:28:11.450 00:28:11.450 ' 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:11.450 
20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:11.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:11.450 20:29:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:11.450 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:13.356 20:29:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:13.356 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.356 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:13.357 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:13.357 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:13.357 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:13.357 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:13.926 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:18.116 20:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:23.396 20:29:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:23.396 20:29:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:23.396 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:28:23.396 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:23.396 Found net devices under 0000:0a:00.0: cvl_0_0 
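The `[: : integer expression expected` message recorded earlier in this log (from `nvmf/common.sh` line 33, evaluating `'[' '' -eq 1 ']'`) is a classic `test`/`[` pitfall: `-eq` requires numeric operands, and an unset variable expands to an empty string. The script tolerates it because the failed test simply returns non-zero. A minimal sketch of the pitfall and a common guard (the `flag` variable is illustrative, not from the SPDK scripts):

```shell
# Reproduce the "[: : integer expression expected" error seen in the log:
# test(1) requires numeric operands for -eq, and an empty string is not one.
flag=""
if [ "$flag" -eq 1 ] 2>/dev/null; then
  state="set"
else
  state="unset"   # the failed test returns non-zero, so we land here
fi
echo "$state"

# A common guard: default the variable to 0 so the comparison is always valid.
if [ "${flag:-0}" -eq 1 ]; then
  echo "guarded: set"
else
  echo "guarded: unset"
fi
```

Without the `2>/dev/null`, the first comparison prints the same diagnostic seen in the log; with the `${flag:-0}` default, the test is always well-formed.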
00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:23.396 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:23.396 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 
-- # ip link set cvl_0_1 up 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:23.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:23.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:28:23.397 00:28:23.397 --- 10.0.0.2 ping statistics --- 00:28:23.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.397 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:23.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:23.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:28:23.397 00:28:23.397 --- 10.0.0.1 ping statistics --- 00:28:23.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.397 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=318259 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 318259 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 318259 ']' 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:23.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:23.397 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:23.397 [2024-11-18 20:29:34.809894] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:28:23.397 [2024-11-18 20:29:34.809996] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:23.397 [2024-11-18 20:29:34.881095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:23.397 [2024-11-18 20:29:34.926958] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:23.397 [2024-11-18 20:29:34.927010] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
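The `waitforlisten 318259` call above blocks until the freshly launched `nvmf_tgt` process is alive and listening on its RPC socket (`/var/tmp/spdk.sock`). A hypothetical sketch of that polling pattern, not SPDK's actual implementation (function name, socket path, and retry count here are stand-ins):

```shell
# Poll until a process is alive AND its RPC socket path exists,
# with a bounded retry count; return non-zero on timeout.
wait_for_listen() {
  local pid=$1 sock=$2 retries=${3:-100}
  while [ "$retries" -gt 0 ]; do
    # kill -0 probes for process existence without sending a signal
    if kill -0 "$pid" 2>/dev/null && [ -S "$sock" ]; then
      return 0
    fi
    retries=$((retries - 1))
    sleep 0.1
  done
  return 1
}

# Example: wait on our own shell PID and a socket that does not exist;
# the call exhausts its retries and reports a timeout.
wait_for_listen $$ "/tmp/no-such-$$.sock" 3 || echo "timed out"
```

Checking both conditions matters: the PID probe catches a target that crashed during startup, while the socket check distinguishes "running" from "ready to accept RPCs".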
00:28:23.397 [2024-11-18 20:29:34.927038] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:23.397 [2024-11-18 20:29:34.927050] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:23.397 [2024-11-18 20:29:34.927060] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:23.397 [2024-11-18 20:29:34.928657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:23.397 [2024-11-18 20:29:34.928710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:23.397 [2024-11-18 20:29:34.928776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:23.397 [2024-11-18 20:29:34.928779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:23.397 [2024-11-18 20:29:35.223090] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.397 20:29:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:23.397 Malloc1 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:23.397 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.398 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:23.398 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.398 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:23.398 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.398 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:23.398 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.398 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:23.398 [2024-11-18 20:29:35.288348] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:28:23.398 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.398 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=318341 00:28:23.398 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:28:23.398 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:25.303 20:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:28:25.303 20:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.303 20:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:25.561 20:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.561 20:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:28:25.561 "tick_rate": 2700000000, 00:28:25.561 "poll_groups": [ 00:28:25.561 { 00:28:25.561 "name": "nvmf_tgt_poll_group_000", 00:28:25.561 "admin_qpairs": 1, 00:28:25.561 "io_qpairs": 1, 00:28:25.561 "current_admin_qpairs": 1, 00:28:25.561 "current_io_qpairs": 1, 00:28:25.561 "pending_bdev_io": 0, 00:28:25.561 "completed_nvme_io": 19511, 00:28:25.561 "transports": [ 00:28:25.561 { 00:28:25.561 "trtype": "TCP" 00:28:25.561 } 00:28:25.561 ] 00:28:25.561 }, 00:28:25.561 { 00:28:25.561 "name": "nvmf_tgt_poll_group_001", 00:28:25.561 "admin_qpairs": 0, 00:28:25.561 "io_qpairs": 1, 00:28:25.561 "current_admin_qpairs": 0, 00:28:25.561 "current_io_qpairs": 1, 00:28:25.561 "pending_bdev_io": 0, 00:28:25.561 "completed_nvme_io": 19786, 00:28:25.561 "transports": [ 00:28:25.561 { 
00:28:25.561 "trtype": "TCP" 00:28:25.561 } 00:28:25.561 ] 00:28:25.561 }, 00:28:25.561 { 00:28:25.561 "name": "nvmf_tgt_poll_group_002", 00:28:25.561 "admin_qpairs": 0, 00:28:25.561 "io_qpairs": 1, 00:28:25.561 "current_admin_qpairs": 0, 00:28:25.561 "current_io_qpairs": 1, 00:28:25.561 "pending_bdev_io": 0, 00:28:25.561 "completed_nvme_io": 19529, 00:28:25.561 "transports": [ 00:28:25.561 { 00:28:25.561 "trtype": "TCP" 00:28:25.561 } 00:28:25.561 ] 00:28:25.561 }, 00:28:25.561 { 00:28:25.561 "name": "nvmf_tgt_poll_group_003", 00:28:25.561 "admin_qpairs": 0, 00:28:25.561 "io_qpairs": 1, 00:28:25.561 "current_admin_qpairs": 0, 00:28:25.561 "current_io_qpairs": 1, 00:28:25.561 "pending_bdev_io": 0, 00:28:25.561 "completed_nvme_io": 19687, 00:28:25.561 "transports": [ 00:28:25.561 { 00:28:25.561 "trtype": "TCP" 00:28:25.561 } 00:28:25.561 ] 00:28:25.561 } 00:28:25.561 ] 00:28:25.561 }' 00:28:25.561 20:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:25.561 20:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:28:25.561 20:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:28:25.561 20:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:28:25.561 20:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 318341 00:28:33.683 Initializing NVMe Controllers 00:28:33.683 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:33.683 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:33.683 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:33.683 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:33.683 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:28:33.683 Initialization complete. Launching workers. 00:28:33.683 ======================================================== 00:28:33.683 Latency(us) 00:28:33.683 Device Information : IOPS MiB/s Average min max 00:28:33.683 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10304.78 40.25 6211.13 2329.15 10441.01 00:28:33.683 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10561.27 41.25 6062.29 2496.97 9983.65 00:28:33.683 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10447.47 40.81 6125.97 2293.38 10577.73 00:28:33.683 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10451.87 40.83 6125.74 2195.21 10780.34 00:28:33.683 ======================================================== 00:28:33.683 Total : 41765.39 163.15 6130.82 2195.21 10780.34 00:28:33.683 00:28:33.683 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:33.683 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:33.683 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:33.683 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:33.683 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:33.683 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:33.683 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:33.683 rmmod nvme_tcp 00:28:33.683 rmmod nvme_fabrics 00:28:33.683 rmmod nvme_keyring 00:28:33.683 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:33.683 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:33.683 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # 
return 0 00:28:33.683 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 318259 ']' 00:28:33.683 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 318259 00:28:33.683 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 318259 ']' 00:28:33.683 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 318259 00:28:33.683 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:33.683 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:33.683 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 318259 00:28:33.683 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:33.683 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:33.683 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 318259' 00:28:33.683 killing process with pid 318259 00:28:33.683 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 318259 00:28:33.683 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 318259 00:28:33.942 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:33.942 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:33.942 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:33.942 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:33.942 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # 
iptables-save 00:28:33.942 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:33.942 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:33.942 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:33.942 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:33.942 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.942 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:33.942 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:35.851 20:29:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:35.851 20:29:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:35.851 20:29:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:35.851 20:29:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:36.788 20:29:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:39.345 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:44.616 20:29:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:44.616 20:29:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:44.616 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:44.616 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:28:44.617 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:44.617 Found net devices under 0000:0a:00.0: cvl_0_0 
00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:44.617 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 
-- # ip link set cvl_0_1 up 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:44.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:44.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:28:44.617 00:28:44.617 --- 10.0.0.2 ping statistics --- 00:28:44.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.617 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:44.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:44.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:28:44.617 00:28:44.617 --- 10.0.0.1 ping statistics --- 00:28:44.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.617 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:44.617 net.core.busy_poll = 1 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:44.617 net.core.busy_read = 1 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:44.617 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:44.617 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:44.617 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:44.617 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:44.617 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:44.617 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:44.617 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:44.617 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:44.617 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=321022 00:28:44.617 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:44.617 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 
321022 00:28:44.617 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 321022 ']' 00:28:44.617 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:44.617 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:44.617 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:44.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:44.617 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:44.617 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:44.617 [2024-11-18 20:29:56.118759] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:28:44.618 [2024-11-18 20:29:56.118846] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:44.618 [2024-11-18 20:29:56.190481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:44.618 [2024-11-18 20:29:56.236824] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:44.618 [2024-11-18 20:29:56.236876] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:44.618 [2024-11-18 20:29:56.236905] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:44.618 [2024-11-18 20:29:56.236927] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:44.618 [2024-11-18 20:29:56.236936] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:44.618 [2024-11-18 20:29:56.238380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.618 [2024-11-18 20:29:56.238443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:44.618 [2024-11-18 20:29:56.238471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:44.618 [2024-11-18 20:29:56.238473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:44.618 [2024-11-18 20:29:56.528662] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.618 20:29:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:44.618 Malloc1 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:44.618 [2024-11-18 20:29:56.591725] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=321053 
00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:44.618 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:47.151 20:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:47.151 20:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.151 20:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:47.151 20:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.151 20:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:47.151 "tick_rate": 2700000000, 00:28:47.151 "poll_groups": [ 00:28:47.151 { 00:28:47.151 "name": "nvmf_tgt_poll_group_000", 00:28:47.151 "admin_qpairs": 1, 00:28:47.151 "io_qpairs": 2, 00:28:47.151 "current_admin_qpairs": 1, 00:28:47.151 "current_io_qpairs": 2, 00:28:47.151 "pending_bdev_io": 0, 00:28:47.151 "completed_nvme_io": 26517, 00:28:47.151 "transports": [ 00:28:47.151 { 00:28:47.151 "trtype": "TCP" 00:28:47.151 } 00:28:47.151 ] 00:28:47.151 }, 00:28:47.151 { 00:28:47.151 "name": "nvmf_tgt_poll_group_001", 00:28:47.151 "admin_qpairs": 0, 00:28:47.151 "io_qpairs": 2, 00:28:47.151 "current_admin_qpairs": 0, 00:28:47.151 "current_io_qpairs": 2, 00:28:47.151 "pending_bdev_io": 0, 00:28:47.151 "completed_nvme_io": 26388, 00:28:47.151 "transports": [ 00:28:47.151 { 00:28:47.151 "trtype": "TCP" 00:28:47.151 } 00:28:47.151 ] 00:28:47.151 }, 00:28:47.151 { 00:28:47.151 "name": "nvmf_tgt_poll_group_002", 00:28:47.151 "admin_qpairs": 0, 00:28:47.151 "io_qpairs": 0, 00:28:47.151 "current_admin_qpairs": 0, 
00:28:47.151 "current_io_qpairs": 0, 00:28:47.151 "pending_bdev_io": 0, 00:28:47.151 "completed_nvme_io": 0, 00:28:47.151 "transports": [ 00:28:47.151 { 00:28:47.151 "trtype": "TCP" 00:28:47.151 } 00:28:47.151 ] 00:28:47.151 }, 00:28:47.151 { 00:28:47.151 "name": "nvmf_tgt_poll_group_003", 00:28:47.151 "admin_qpairs": 0, 00:28:47.151 "io_qpairs": 0, 00:28:47.151 "current_admin_qpairs": 0, 00:28:47.151 "current_io_qpairs": 0, 00:28:47.151 "pending_bdev_io": 0, 00:28:47.151 "completed_nvme_io": 0, 00:28:47.151 "transports": [ 00:28:47.151 { 00:28:47.151 "trtype": "TCP" 00:28:47.151 } 00:28:47.151 ] 00:28:47.151 } 00:28:47.151 ] 00:28:47.151 }' 00:28:47.151 20:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:47.151 20:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:47.151 20:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:47.151 20:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:47.152 20:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 321053 00:28:55.263 Initializing NVMe Controllers 00:28:55.263 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:55.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:55.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:55.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:55.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:55.263 Initialization complete. Launching workers. 
00:28:55.263 ======================================================== 00:28:55.263 Latency(us) 00:28:55.263 Device Information : IOPS MiB/s Average min max 00:28:55.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6634.40 25.92 9675.58 1810.18 54218.14 00:28:55.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6277.30 24.52 10197.01 2003.60 54806.14 00:28:55.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6978.10 27.26 9190.68 1487.17 53777.48 00:28:55.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7206.40 28.15 8882.63 1789.66 54055.76 00:28:55.263 ======================================================== 00:28:55.263 Total : 27096.19 105.84 9460.61 1487.17 54806.14 00:28:55.263 00:28:55.263 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:55.263 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:55.264 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:55.264 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:55.264 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:55.264 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:55.264 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:55.264 rmmod nvme_tcp 00:28:55.264 rmmod nvme_fabrics 00:28:55.264 rmmod nvme_keyring 00:28:55.264 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:55.264 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:55.264 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:55.264 20:30:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 321022 ']' 00:28:55.264 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 321022 00:28:55.264 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 321022 ']' 00:28:55.264 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 321022 00:28:55.264 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:55.264 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:55.264 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 321022 00:28:55.264 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:55.264 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:55.264 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 321022' 00:28:55.264 killing process with pid 321022 00:28:55.264 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 321022 00:28:55.264 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 321022 00:28:55.264 20:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:55.264 20:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:55.264 20:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:55.264 20:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:55.264 20:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:55.264 20:30:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:55.264 20:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:55.264 20:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:55.264 20:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:55.264 20:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.264 20:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:55.264 20:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.556 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:58.556 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:58.556 00:28:58.556 real 0m47.116s 00:28:58.556 user 2m40.757s 00:28:58.556 sys 0m10.368s 00:28:58.556 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:58.556 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:58.556 ************************************ 00:28:58.556 END TEST nvmf_perf_adq 00:28:58.556 ************************************ 00:28:58.556 20:30:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:58.556 20:30:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:58.556 20:30:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:58.556 20:30:10 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:28:58.556 ************************************ 00:28:58.556 START TEST nvmf_shutdown 00:28:58.556 ************************************ 00:28:58.556 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:58.556 * Looking for test storage... 00:28:58.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:58.556 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:58.556 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:28:58.556 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:58.556 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:58.556 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:58.556 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:58.556 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:58.556 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:58.556 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:58.556 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:58.556 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:58.556 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:58.557 20:30:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:58.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.557 --rc genhtml_branch_coverage=1 00:28:58.557 --rc genhtml_function_coverage=1 00:28:58.557 --rc genhtml_legend=1 00:28:58.557 --rc geninfo_all_blocks=1 00:28:58.557 --rc geninfo_unexecuted_blocks=1 00:28:58.557 00:28:58.557 ' 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:58.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.557 --rc genhtml_branch_coverage=1 00:28:58.557 --rc genhtml_function_coverage=1 00:28:58.557 --rc genhtml_legend=1 00:28:58.557 --rc geninfo_all_blocks=1 00:28:58.557 --rc geninfo_unexecuted_blocks=1 00:28:58.557 00:28:58.557 ' 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:58.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.557 --rc genhtml_branch_coverage=1 00:28:58.557 --rc genhtml_function_coverage=1 00:28:58.557 --rc genhtml_legend=1 00:28:58.557 --rc geninfo_all_blocks=1 00:28:58.557 --rc geninfo_unexecuted_blocks=1 00:28:58.557 00:28:58.557 ' 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:58.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.557 --rc genhtml_branch_coverage=1 00:28:58.557 --rc genhtml_function_coverage=1 00:28:58.557 --rc genhtml_legend=1 00:28:58.557 --rc geninfo_all_blocks=1 00:28:58.557 --rc geninfo_unexecuted_blocks=1 00:28:58.557 00:28:58.557 ' 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:58.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:58.557 ************************************ 00:28:58.557 START TEST nvmf_shutdown_tc1 00:28:58.557 ************************************ 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:58.557 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:58.558 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:58.558 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:58.558 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.558 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:58.558 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.558 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:58.558 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:58.558 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:58.558 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:01.091 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:01.091 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:01.091 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:01.091 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:01.091 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:01.091 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:01.091 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:01.091 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:29:01.091 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:01.091 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:29:01.091 20:30:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:29:01.091 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:29:01.091 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:29:01.091 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:29:01.091 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:01.091 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:01.091 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:01.091 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:01.091 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:01.091 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:01.091 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:01.091 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:01.091 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:01.091 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:01.091 20:30:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:01.091 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:01.091 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:01.092 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:01.092 20:30:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:01.092 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:01.092 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:01.092 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:01.092 20:30:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:01.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:01.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:29:01.092 00:29:01.092 --- 10.0.0.2 ping statistics --- 00:29:01.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.092 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:01.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:01.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:29:01.092 00:29:01.092 --- 10.0.0.1 ping statistics --- 00:29:01.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.092 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=324986 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 324986 00:29:01.092 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 324986 ']' 00:29:01.093 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:01.093 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:01.093 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:01.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:01.093 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:01.093 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:01.093 [2024-11-18 20:30:12.757022] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:29:01.093 [2024-11-18 20:30:12.757092] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:01.093 [2024-11-18 20:30:12.830401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:01.093 [2024-11-18 20:30:12.879353] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:01.093 [2024-11-18 20:30:12.879404] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:01.093 [2024-11-18 20:30:12.879427] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:01.093 [2024-11-18 20:30:12.879438] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:01.093 [2024-11-18 20:30:12.879448] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:01.093 [2024-11-18 20:30:12.882658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:01.093 [2024-11-18 20:30:12.882745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:01.093 [2024-11-18 20:30:12.882804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:01.093 [2024-11-18 20:30:12.882808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.093 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:01.093 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:29:01.093 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:01.093 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:01.093 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:01.093 [2024-11-18 20:30:13.020841] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.093 20:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.093 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:01.093 Malloc1 00:29:01.351 [2024-11-18 20:30:13.110584] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:01.351 Malloc2 00:29:01.351 Malloc3 00:29:01.351 Malloc4 00:29:01.351 Malloc5 00:29:01.351 Malloc6 00:29:01.610 Malloc7 00:29:01.610 Malloc8 00:29:01.610 Malloc9 
00:29:01.610 Malloc10 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=325162 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 325162 /var/tmp/bdevperf.sock 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 325162 ']' 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:29:01.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.610 { 00:29:01.610 "params": { 00:29:01.610 "name": "Nvme$subsystem", 00:29:01.610 "trtype": "$TEST_TRANSPORT", 00:29:01.610 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.610 "adrfam": "ipv4", 00:29:01.610 "trsvcid": "$NVMF_PORT", 00:29:01.610 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.610 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.610 "hdgst": ${hdgst:-false}, 00:29:01.610 "ddgst": ${ddgst:-false} 00:29:01.610 }, 00:29:01.610 "method": "bdev_nvme_attach_controller" 00:29:01.610 } 00:29:01.610 EOF 00:29:01.610 )") 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.610 { 00:29:01.610 "params": { 00:29:01.610 "name": "Nvme$subsystem", 00:29:01.610 "trtype": "$TEST_TRANSPORT", 00:29:01.610 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.610 
"adrfam": "ipv4", 00:29:01.610 "trsvcid": "$NVMF_PORT", 00:29:01.610 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.610 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.610 "hdgst": ${hdgst:-false}, 00:29:01.610 "ddgst": ${ddgst:-false} 00:29:01.610 }, 00:29:01.610 "method": "bdev_nvme_attach_controller" 00:29:01.610 } 00:29:01.610 EOF 00:29:01.610 )") 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.610 { 00:29:01.610 "params": { 00:29:01.610 "name": "Nvme$subsystem", 00:29:01.610 "trtype": "$TEST_TRANSPORT", 00:29:01.610 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.610 "adrfam": "ipv4", 00:29:01.610 "trsvcid": "$NVMF_PORT", 00:29:01.610 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.610 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.610 "hdgst": ${hdgst:-false}, 00:29:01.610 "ddgst": ${ddgst:-false} 00:29:01.610 }, 00:29:01.610 "method": "bdev_nvme_attach_controller" 00:29:01.610 } 00:29:01.610 EOF 00:29:01.610 )") 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.610 { 00:29:01.610 "params": { 00:29:01.610 "name": "Nvme$subsystem", 00:29:01.610 "trtype": "$TEST_TRANSPORT", 00:29:01.610 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.610 "adrfam": "ipv4", 00:29:01.610 "trsvcid": "$NVMF_PORT", 00:29:01.610 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:29:01.610 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.610 "hdgst": ${hdgst:-false}, 00:29:01.610 "ddgst": ${ddgst:-false} 00:29:01.610 }, 00:29:01.610 "method": "bdev_nvme_attach_controller" 00:29:01.610 } 00:29:01.610 EOF 00:29:01.610 )") 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.610 { 00:29:01.610 "params": { 00:29:01.610 "name": "Nvme$subsystem", 00:29:01.610 "trtype": "$TEST_TRANSPORT", 00:29:01.610 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.610 "adrfam": "ipv4", 00:29:01.610 "trsvcid": "$NVMF_PORT", 00:29:01.610 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.610 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.610 "hdgst": ${hdgst:-false}, 00:29:01.610 "ddgst": ${ddgst:-false} 00:29:01.610 }, 00:29:01.610 "method": "bdev_nvme_attach_controller" 00:29:01.610 } 00:29:01.610 EOF 00:29:01.610 )") 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.610 { 00:29:01.610 "params": { 00:29:01.610 "name": "Nvme$subsystem", 00:29:01.610 "trtype": "$TEST_TRANSPORT", 00:29:01.610 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.610 "adrfam": "ipv4", 00:29:01.610 "trsvcid": "$NVMF_PORT", 00:29:01.610 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.610 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.610 "hdgst": ${hdgst:-false}, 00:29:01.610 "ddgst": 
${ddgst:-false} 00:29:01.610 }, 00:29:01.610 "method": "bdev_nvme_attach_controller" 00:29:01.610 } 00:29:01.610 EOF 00:29:01.610 )") 00:29:01.610 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:01.611 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.611 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.611 { 00:29:01.611 "params": { 00:29:01.611 "name": "Nvme$subsystem", 00:29:01.611 "trtype": "$TEST_TRANSPORT", 00:29:01.611 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.611 "adrfam": "ipv4", 00:29:01.611 "trsvcid": "$NVMF_PORT", 00:29:01.611 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.611 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.611 "hdgst": ${hdgst:-false}, 00:29:01.611 "ddgst": ${ddgst:-false} 00:29:01.611 }, 00:29:01.611 "method": "bdev_nvme_attach_controller" 00:29:01.611 } 00:29:01.611 EOF 00:29:01.611 )") 00:29:01.611 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:01.611 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.611 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.611 { 00:29:01.611 "params": { 00:29:01.611 "name": "Nvme$subsystem", 00:29:01.611 "trtype": "$TEST_TRANSPORT", 00:29:01.611 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.611 "adrfam": "ipv4", 00:29:01.611 "trsvcid": "$NVMF_PORT", 00:29:01.611 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.611 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.611 "hdgst": ${hdgst:-false}, 00:29:01.611 "ddgst": ${ddgst:-false} 00:29:01.611 }, 00:29:01.611 "method": "bdev_nvme_attach_controller" 00:29:01.611 } 00:29:01.611 EOF 00:29:01.611 
)") 00:29:01.611 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:01.611 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.611 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.611 { 00:29:01.611 "params": { 00:29:01.611 "name": "Nvme$subsystem", 00:29:01.611 "trtype": "$TEST_TRANSPORT", 00:29:01.611 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.611 "adrfam": "ipv4", 00:29:01.611 "trsvcid": "$NVMF_PORT", 00:29:01.611 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.611 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.611 "hdgst": ${hdgst:-false}, 00:29:01.611 "ddgst": ${ddgst:-false} 00:29:01.611 }, 00:29:01.611 "method": "bdev_nvme_attach_controller" 00:29:01.611 } 00:29:01.611 EOF 00:29:01.611 )") 00:29:01.611 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:01.611 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.611 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.611 { 00:29:01.611 "params": { 00:29:01.611 "name": "Nvme$subsystem", 00:29:01.611 "trtype": "$TEST_TRANSPORT", 00:29:01.611 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.611 "adrfam": "ipv4", 00:29:01.611 "trsvcid": "$NVMF_PORT", 00:29:01.611 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.611 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.611 "hdgst": ${hdgst:-false}, 00:29:01.611 "ddgst": ${ddgst:-false} 00:29:01.611 }, 00:29:01.611 "method": "bdev_nvme_attach_controller" 00:29:01.611 } 00:29:01.611 EOF 00:29:01.611 )") 00:29:01.611 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:01.611 
20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:29:01.611 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:01.611 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:01.611 "params": { 00:29:01.611 "name": "Nvme1", 00:29:01.611 "trtype": "tcp", 00:29:01.611 "traddr": "10.0.0.2", 00:29:01.611 "adrfam": "ipv4", 00:29:01.611 "trsvcid": "4420", 00:29:01.611 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:01.611 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:01.611 "hdgst": false, 00:29:01.611 "ddgst": false 00:29:01.611 }, 00:29:01.611 "method": "bdev_nvme_attach_controller" 00:29:01.611 },{ 00:29:01.611 "params": { 00:29:01.611 "name": "Nvme2", 00:29:01.611 "trtype": "tcp", 00:29:01.611 "traddr": "10.0.0.2", 00:29:01.611 "adrfam": "ipv4", 00:29:01.611 "trsvcid": "4420", 00:29:01.611 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:01.611 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:01.611 "hdgst": false, 00:29:01.611 "ddgst": false 00:29:01.611 }, 00:29:01.611 "method": "bdev_nvme_attach_controller" 00:29:01.611 },{ 00:29:01.611 "params": { 00:29:01.611 "name": "Nvme3", 00:29:01.611 "trtype": "tcp", 00:29:01.611 "traddr": "10.0.0.2", 00:29:01.611 "adrfam": "ipv4", 00:29:01.611 "trsvcid": "4420", 00:29:01.611 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:01.611 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:01.611 "hdgst": false, 00:29:01.611 "ddgst": false 00:29:01.611 }, 00:29:01.611 "method": "bdev_nvme_attach_controller" 00:29:01.611 },{ 00:29:01.611 "params": { 00:29:01.611 "name": "Nvme4", 00:29:01.611 "trtype": "tcp", 00:29:01.611 "traddr": "10.0.0.2", 00:29:01.611 "adrfam": "ipv4", 00:29:01.611 "trsvcid": "4420", 00:29:01.611 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:01.611 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:01.611 "hdgst": false, 00:29:01.611 "ddgst": false 00:29:01.611 }, 
00:29:01.611 "method": "bdev_nvme_attach_controller" 00:29:01.611 },{ 00:29:01.611 "params": { 00:29:01.611 "name": "Nvme5", 00:29:01.611 "trtype": "tcp", 00:29:01.611 "traddr": "10.0.0.2", 00:29:01.611 "adrfam": "ipv4", 00:29:01.611 "trsvcid": "4420", 00:29:01.611 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:01.611 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:01.611 "hdgst": false, 00:29:01.611 "ddgst": false 00:29:01.611 }, 00:29:01.611 "method": "bdev_nvme_attach_controller" 00:29:01.611 },{ 00:29:01.611 "params": { 00:29:01.611 "name": "Nvme6", 00:29:01.611 "trtype": "tcp", 00:29:01.611 "traddr": "10.0.0.2", 00:29:01.611 "adrfam": "ipv4", 00:29:01.611 "trsvcid": "4420", 00:29:01.611 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:01.611 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:01.611 "hdgst": false, 00:29:01.611 "ddgst": false 00:29:01.611 }, 00:29:01.611 "method": "bdev_nvme_attach_controller" 00:29:01.611 },{ 00:29:01.611 "params": { 00:29:01.611 "name": "Nvme7", 00:29:01.611 "trtype": "tcp", 00:29:01.611 "traddr": "10.0.0.2", 00:29:01.611 "adrfam": "ipv4", 00:29:01.611 "trsvcid": "4420", 00:29:01.611 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:01.611 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:01.611 "hdgst": false, 00:29:01.611 "ddgst": false 00:29:01.611 }, 00:29:01.611 "method": "bdev_nvme_attach_controller" 00:29:01.611 },{ 00:29:01.611 "params": { 00:29:01.611 "name": "Nvme8", 00:29:01.611 "trtype": "tcp", 00:29:01.611 "traddr": "10.0.0.2", 00:29:01.611 "adrfam": "ipv4", 00:29:01.611 "trsvcid": "4420", 00:29:01.611 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:01.611 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:01.611 "hdgst": false, 00:29:01.611 "ddgst": false 00:29:01.611 }, 00:29:01.611 "method": "bdev_nvme_attach_controller" 00:29:01.611 },{ 00:29:01.611 "params": { 00:29:01.611 "name": "Nvme9", 00:29:01.611 "trtype": "tcp", 00:29:01.611 "traddr": "10.0.0.2", 00:29:01.611 "adrfam": "ipv4", 00:29:01.611 "trsvcid": "4420", 00:29:01.611 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:01.611 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:01.611 "hdgst": false, 00:29:01.611 "ddgst": false 00:29:01.611 }, 00:29:01.611 "method": "bdev_nvme_attach_controller" 00:29:01.611 },{ 00:29:01.611 "params": { 00:29:01.611 "name": "Nvme10", 00:29:01.611 "trtype": "tcp", 00:29:01.611 "traddr": "10.0.0.2", 00:29:01.611 "adrfam": "ipv4", 00:29:01.611 "trsvcid": "4420", 00:29:01.611 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:01.611 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:01.611 "hdgst": false, 00:29:01.611 "ddgst": false 00:29:01.611 }, 00:29:01.611 "method": "bdev_nvme_attach_controller" 00:29:01.611 }' 00:29:01.871 [2024-11-18 20:30:13.621591] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:29:01.871 [2024-11-18 20:30:13.621690] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:01.871 [2024-11-18 20:30:13.695016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.871 [2024-11-18 20:30:13.742283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.775 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:03.775 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:29:03.775 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:03.775 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.775 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:03.775 20:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.775 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 325162 00:29:03.775 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:29:03.775 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:29:04.712 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 325162 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 324986 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:04.712 { 00:29:04.712 "params": { 00:29:04.712 "name": "Nvme$subsystem", 00:29:04.712 "trtype": "$TEST_TRANSPORT", 00:29:04.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.712 "adrfam": "ipv4", 00:29:04.712 "trsvcid": 
"$NVMF_PORT", 00:29:04.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.712 "hdgst": ${hdgst:-false}, 00:29:04.712 "ddgst": ${ddgst:-false} 00:29:04.712 }, 00:29:04.712 "method": "bdev_nvme_attach_controller" 00:29:04.712 } 00:29:04.712 EOF 00:29:04.712 )") 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:04.712 { 00:29:04.712 "params": { 00:29:04.712 "name": "Nvme$subsystem", 00:29:04.712 "trtype": "$TEST_TRANSPORT", 00:29:04.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.712 "adrfam": "ipv4", 00:29:04.712 "trsvcid": "$NVMF_PORT", 00:29:04.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.712 "hdgst": ${hdgst:-false}, 00:29:04.712 "ddgst": ${ddgst:-false} 00:29:04.712 }, 00:29:04.712 "method": "bdev_nvme_attach_controller" 00:29:04.712 } 00:29:04.712 EOF 00:29:04.712 )") 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:04.712 { 00:29:04.712 "params": { 00:29:04.712 "name": "Nvme$subsystem", 00:29:04.712 "trtype": "$TEST_TRANSPORT", 00:29:04.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.712 "adrfam": "ipv4", 00:29:04.712 "trsvcid": "$NVMF_PORT", 00:29:04.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.712 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:29:04.712 "hdgst": ${hdgst:-false}, 00:29:04.712 "ddgst": ${ddgst:-false} 00:29:04.712 }, 00:29:04.712 "method": "bdev_nvme_attach_controller" 00:29:04.712 } 00:29:04.712 EOF 00:29:04.712 )") 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:04.712 { 00:29:04.712 "params": { 00:29:04.712 "name": "Nvme$subsystem", 00:29:04.712 "trtype": "$TEST_TRANSPORT", 00:29:04.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.712 "adrfam": "ipv4", 00:29:04.712 "trsvcid": "$NVMF_PORT", 00:29:04.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.712 "hdgst": ${hdgst:-false}, 00:29:04.712 "ddgst": ${ddgst:-false} 00:29:04.712 }, 00:29:04.712 "method": "bdev_nvme_attach_controller" 00:29:04.712 } 00:29:04.712 EOF 00:29:04.712 )") 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:04.712 { 00:29:04.712 "params": { 00:29:04.712 "name": "Nvme$subsystem", 00:29:04.712 "trtype": "$TEST_TRANSPORT", 00:29:04.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.712 "adrfam": "ipv4", 00:29:04.712 "trsvcid": "$NVMF_PORT", 00:29:04.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.712 "hdgst": ${hdgst:-false}, 00:29:04.712 "ddgst": ${ddgst:-false} 00:29:04.712 
}, 00:29:04.712 "method": "bdev_nvme_attach_controller" 00:29:04.712 } 00:29:04.712 EOF 00:29:04.712 )") 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:04.712 { 00:29:04.712 "params": { 00:29:04.712 "name": "Nvme$subsystem", 00:29:04.712 "trtype": "$TEST_TRANSPORT", 00:29:04.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.712 "adrfam": "ipv4", 00:29:04.712 "trsvcid": "$NVMF_PORT", 00:29:04.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.712 "hdgst": ${hdgst:-false}, 00:29:04.712 "ddgst": ${ddgst:-false} 00:29:04.712 }, 00:29:04.712 "method": "bdev_nvme_attach_controller" 00:29:04.712 } 00:29:04.712 EOF 00:29:04.712 )") 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:04.712 { 00:29:04.712 "params": { 00:29:04.712 "name": "Nvme$subsystem", 00:29:04.712 "trtype": "$TEST_TRANSPORT", 00:29:04.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.712 "adrfam": "ipv4", 00:29:04.712 "trsvcid": "$NVMF_PORT", 00:29:04.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.712 "hdgst": ${hdgst:-false}, 00:29:04.712 "ddgst": ${ddgst:-false} 00:29:04.712 }, 00:29:04.712 "method": "bdev_nvme_attach_controller" 00:29:04.712 } 00:29:04.712 EOF 00:29:04.712 )") 00:29:04.712 20:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:04.712 { 00:29:04.712 "params": { 00:29:04.712 "name": "Nvme$subsystem", 00:29:04.712 "trtype": "$TEST_TRANSPORT", 00:29:04.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.712 "adrfam": "ipv4", 00:29:04.712 "trsvcid": "$NVMF_PORT", 00:29:04.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.712 "hdgst": ${hdgst:-false}, 00:29:04.712 "ddgst": ${ddgst:-false} 00:29:04.712 }, 00:29:04.712 "method": "bdev_nvme_attach_controller" 00:29:04.712 } 00:29:04.712 EOF 00:29:04.712 )") 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:04.712 { 00:29:04.712 "params": { 00:29:04.712 "name": "Nvme$subsystem", 00:29:04.712 "trtype": "$TEST_TRANSPORT", 00:29:04.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.712 "adrfam": "ipv4", 00:29:04.712 "trsvcid": "$NVMF_PORT", 00:29:04.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.712 "hdgst": ${hdgst:-false}, 00:29:04.712 "ddgst": ${ddgst:-false} 00:29:04.712 }, 00:29:04.712 "method": "bdev_nvme_attach_controller" 00:29:04.712 } 00:29:04.712 EOF 00:29:04.712 )") 00:29:04.712 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:04.712 20:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:04.713 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:04.713 { 00:29:04.713 "params": { 00:29:04.713 "name": "Nvme$subsystem", 00:29:04.713 "trtype": "$TEST_TRANSPORT", 00:29:04.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.713 "adrfam": "ipv4", 00:29:04.713 "trsvcid": "$NVMF_PORT", 00:29:04.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.713 "hdgst": ${hdgst:-false}, 00:29:04.713 "ddgst": ${ddgst:-false} 00:29:04.713 }, 00:29:04.713 "method": "bdev_nvme_attach_controller" 00:29:04.713 } 00:29:04.713 EOF 00:29:04.713 )") 00:29:04.713 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:04.713 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:29:04.713 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:04.713 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:04.713 "params": { 00:29:04.713 "name": "Nvme1", 00:29:04.713 "trtype": "tcp", 00:29:04.713 "traddr": "10.0.0.2", 00:29:04.713 "adrfam": "ipv4", 00:29:04.713 "trsvcid": "4420", 00:29:04.713 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:04.713 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:04.713 "hdgst": false, 00:29:04.713 "ddgst": false 00:29:04.713 }, 00:29:04.713 "method": "bdev_nvme_attach_controller" 00:29:04.713 },{ 00:29:04.713 "params": { 00:29:04.713 "name": "Nvme2", 00:29:04.713 "trtype": "tcp", 00:29:04.713 "traddr": "10.0.0.2", 00:29:04.713 "adrfam": "ipv4", 00:29:04.713 "trsvcid": "4420", 00:29:04.713 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:04.713 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:04.713 "hdgst": false, 00:29:04.713 "ddgst": false 00:29:04.713 }, 00:29:04.713 "method": "bdev_nvme_attach_controller" 00:29:04.713 },{ 00:29:04.713 "params": { 00:29:04.713 "name": "Nvme3", 00:29:04.713 "trtype": "tcp", 00:29:04.713 "traddr": "10.0.0.2", 00:29:04.713 "adrfam": "ipv4", 00:29:04.713 "trsvcid": "4420", 00:29:04.713 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:04.713 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:04.713 "hdgst": false, 00:29:04.713 "ddgst": false 00:29:04.713 }, 00:29:04.713 "method": "bdev_nvme_attach_controller" 00:29:04.713 },{ 00:29:04.713 "params": { 00:29:04.713 "name": "Nvme4", 00:29:04.713 "trtype": "tcp", 00:29:04.713 "traddr": "10.0.0.2", 00:29:04.713 "adrfam": "ipv4", 00:29:04.713 "trsvcid": "4420", 00:29:04.713 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:04.713 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:04.713 "hdgst": false, 00:29:04.713 "ddgst": false 00:29:04.713 }, 00:29:04.713 "method": "bdev_nvme_attach_controller" 00:29:04.713 },{ 00:29:04.713 "params": { 
00:29:04.713 "name": "Nvme5", 00:29:04.713 "trtype": "tcp", 00:29:04.713 "traddr": "10.0.0.2", 00:29:04.713 "adrfam": "ipv4", 00:29:04.713 "trsvcid": "4420", 00:29:04.713 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:04.713 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:04.713 "hdgst": false, 00:29:04.713 "ddgst": false 00:29:04.713 }, 00:29:04.713 "method": "bdev_nvme_attach_controller" 00:29:04.713 },{ 00:29:04.713 "params": { 00:29:04.713 "name": "Nvme6", 00:29:04.713 "trtype": "tcp", 00:29:04.713 "traddr": "10.0.0.2", 00:29:04.713 "adrfam": "ipv4", 00:29:04.713 "trsvcid": "4420", 00:29:04.713 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:04.713 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:04.713 "hdgst": false, 00:29:04.713 "ddgst": false 00:29:04.713 }, 00:29:04.713 "method": "bdev_nvme_attach_controller" 00:29:04.713 },{ 00:29:04.713 "params": { 00:29:04.713 "name": "Nvme7", 00:29:04.713 "trtype": "tcp", 00:29:04.713 "traddr": "10.0.0.2", 00:29:04.713 "adrfam": "ipv4", 00:29:04.713 "trsvcid": "4420", 00:29:04.713 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:04.713 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:04.713 "hdgst": false, 00:29:04.713 "ddgst": false 00:29:04.713 }, 00:29:04.713 "method": "bdev_nvme_attach_controller" 00:29:04.713 },{ 00:29:04.713 "params": { 00:29:04.713 "name": "Nvme8", 00:29:04.713 "trtype": "tcp", 00:29:04.713 "traddr": "10.0.0.2", 00:29:04.713 "adrfam": "ipv4", 00:29:04.713 "trsvcid": "4420", 00:29:04.713 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:04.713 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:04.713 "hdgst": false, 00:29:04.713 "ddgst": false 00:29:04.713 }, 00:29:04.713 "method": "bdev_nvme_attach_controller" 00:29:04.713 },{ 00:29:04.713 "params": { 00:29:04.713 "name": "Nvme9", 00:29:04.713 "trtype": "tcp", 00:29:04.713 "traddr": "10.0.0.2", 00:29:04.713 "adrfam": "ipv4", 00:29:04.713 "trsvcid": "4420", 00:29:04.713 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:04.713 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:29:04.713 "hdgst": false, 00:29:04.713 "ddgst": false 00:29:04.713 }, 00:29:04.713 "method": "bdev_nvme_attach_controller" 00:29:04.713 },{ 00:29:04.713 "params": { 00:29:04.713 "name": "Nvme10", 00:29:04.713 "trtype": "tcp", 00:29:04.713 "traddr": "10.0.0.2", 00:29:04.713 "adrfam": "ipv4", 00:29:04.713 "trsvcid": "4420", 00:29:04.713 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:04.713 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:04.713 "hdgst": false, 00:29:04.713 "ddgst": false 00:29:04.713 }, 00:29:04.713 "method": "bdev_nvme_attach_controller" 00:29:04.713 }' 00:29:04.713 [2024-11-18 20:30:16.672066] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:29:04.713 [2024-11-18 20:30:16.672145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid325583 ] 00:29:04.972 [2024-11-18 20:30:16.745668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.972 [2024-11-18 20:30:16.792479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.877 Running I/O for 1 seconds... 
00:29:07.704 1824.00 IOPS, 114.00 MiB/s 00:29:07.704 Latency(us) 00:29:07.704 [2024-11-18T19:30:19.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.704 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.704 Verification LBA range: start 0x0 length 0x400 00:29:07.704 Nvme1n1 : 1.13 229.09 14.32 0.00 0.00 275134.88 8835.22 254765.13 00:29:07.704 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.704 Verification LBA range: start 0x0 length 0x400 00:29:07.704 Nvme2n1 : 1.09 233.94 14.62 0.00 0.00 266022.31 18252.99 267192.70 00:29:07.704 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.704 Verification LBA range: start 0x0 length 0x400 00:29:07.704 Nvme3n1 : 1.11 236.90 14.81 0.00 0.00 257694.94 5267.15 256318.58 00:29:07.704 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.704 Verification LBA range: start 0x0 length 0x400 00:29:07.704 Nvme4n1 : 1.13 271.20 16.95 0.00 0.00 219381.38 10000.31 239230.67 00:29:07.704 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.704 Verification LBA range: start 0x0 length 0x400 00:29:07.704 Nvme5n1 : 1.16 220.75 13.80 0.00 0.00 268381.30 28738.75 260978.92 00:29:07.704 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.704 Verification LBA range: start 0x0 length 0x400 00:29:07.704 Nvme6n1 : 1.12 228.67 14.29 0.00 0.00 255017.72 19612.25 253211.69 00:29:07.704 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.704 Verification LBA range: start 0x0 length 0x400 00:29:07.704 Nvme7n1 : 1.16 220.13 13.76 0.00 0.00 261416.01 20777.34 254765.13 00:29:07.704 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.704 Verification LBA range: start 0x0 length 0x400 00:29:07.704 Nvme8n1 : 1.13 226.59 14.16 0.00 0.00 248816.07 32816.55 245444.46 
00:29:07.704 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.704 Verification LBA range: start 0x0 length 0x400 00:29:07.704 Nvme9n1 : 1.17 222.06 13.88 0.00 0.00 250683.23 20486.07 284280.60 00:29:07.704 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.704 Verification LBA range: start 0x0 length 0x400 00:29:07.704 Nvme10n1 : 1.18 271.68 16.98 0.00 0.00 201386.40 5024.43 254765.13 00:29:07.704 [2024-11-18T19:30:19.712Z] =================================================================================================================== 00:29:07.704 [2024-11-18T19:30:19.712Z] Total : 2361.00 147.56 0.00 0.00 248699.06 5024.43 284280.60 00:29:07.965 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:29:07.965 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:07.965 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:07.965 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:07.965 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:07.965 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:07.965 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:29:07.965 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:07.965 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:29:07.965 20:30:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:07.965 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:07.965 rmmod nvme_tcp 00:29:07.965 rmmod nvme_fabrics 00:29:07.965 rmmod nvme_keyring 00:29:07.965 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:07.965 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:29:07.965 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:29:07.965 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 324986 ']' 00:29:07.965 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 324986 00:29:07.965 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 324986 ']' 00:29:07.965 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 324986 00:29:07.965 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:29:07.965 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:07.965 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 324986 00:29:07.965 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:07.965 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:07.965 20:30:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 324986' 00:29:07.965 killing process with pid 324986 00:29:07.965 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 324986 00:29:07.965 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 324986 00:29:08.532 20:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:08.532 20:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:08.532 20:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:08.532 20:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:29:08.532 20:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:29:08.532 20:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:08.532 20:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:29:08.532 20:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:08.532 20:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:08.532 20:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.532 20:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:08.532 20:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.443 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:10.443 00:29:10.443 real 0m12.041s 00:29:10.443 user 0m34.814s 00:29:10.443 sys 0m3.257s 00:29:10.443 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:10.443 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:10.443 ************************************ 00:29:10.443 END TEST nvmf_shutdown_tc1 00:29:10.443 ************************************ 00:29:10.443 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:10.444 ************************************ 00:29:10.444 START TEST nvmf_shutdown_tc2 00:29:10.444 ************************************ 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:10.444 20:30:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:10.444 20:30:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:10.444 20:30:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:10.444 20:30:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:10.444 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:10.444 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:10.444 20:30:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.444 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:10.704 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:10.704 20:30:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:10.704 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:10.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:10.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:29:10.704 00:29:10.704 --- 10.0.0.2 ping statistics --- 00:29:10.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.704 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:10.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:10.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:29:10.704 00:29:10.704 --- 10.0.0.1 ping statistics --- 00:29:10.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.704 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.704 
20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=326348 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 326348 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 326348 ']' 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:10.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:10.704 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.704 [2024-11-18 20:30:22.670113] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:29:10.704 [2024-11-18 20:30:22.670182] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:10.964 [2024-11-18 20:30:22.746991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:10.964 [2024-11-18 20:30:22.796027] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:10.964 [2024-11-18 20:30:22.796084] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:10.964 [2024-11-18 20:30:22.796113] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:10.964 [2024-11-18 20:30:22.796125] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:10.964 [2024-11-18 20:30:22.796135] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:10.964 [2024-11-18 20:30:22.797728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:10.964 [2024-11-18 20:30:22.797791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:10.964 [2024-11-18 20:30:22.797813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:10.964 [2024-11-18 20:30:22.797815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:10.964 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:10.964 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:10.964 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:10.964 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:10.964 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.964 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:10.964 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:10.964 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.964 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.964 [2024-11-18 20:30:22.947230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:10.964 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.964 20:30:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:10.964 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:10.964 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:10.964 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.964 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:10.964 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:10.964 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:10.964 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:10.964 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:10.964 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:10.964 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:11.225 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.225 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:11.225 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.225 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:29:11.225 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.225 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:11.225 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.225 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:11.225 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.225 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:11.225 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.225 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:11.225 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.225 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:11.225 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:11.225 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.225 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:11.225 Malloc1 00:29:11.225 [2024-11-18 20:30:23.047613] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:11.225 Malloc2 00:29:11.225 Malloc3 00:29:11.225 Malloc4 00:29:11.225 Malloc5 00:29:11.486 Malloc6 00:29:11.486 Malloc7 00:29:11.486 Malloc8 00:29:11.486 Malloc9 
00:29:11.486 Malloc10 00:29:11.486 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.486 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:11.486 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:11.486 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:11.745 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=326526 00:29:11.745 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 326526 /var/tmp/bdevperf.sock 00:29:11.745 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 326526 ']' 00:29:11.745 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:11.745 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:11.745 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:11.745 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:11.745 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:29:11.745 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:29:11.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:11.745 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:29:11.745 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:11.745 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.745 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:11.745 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.745 { 00:29:11.745 "params": { 00:29:11.745 "name": "Nvme$subsystem", 00:29:11.745 "trtype": "$TEST_TRANSPORT", 00:29:11.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.745 "adrfam": "ipv4", 00:29:11.745 "trsvcid": "$NVMF_PORT", 00:29:11.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.745 "hdgst": ${hdgst:-false}, 00:29:11.745 "ddgst": ${ddgst:-false} 00:29:11.745 }, 00:29:11.745 "method": "bdev_nvme_attach_controller" 00:29:11.745 } 00:29:11.745 EOF 00:29:11.745 )") 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.746 { 00:29:11.746 "params": { 00:29:11.746 "name": "Nvme$subsystem", 00:29:11.746 "trtype": "$TEST_TRANSPORT", 00:29:11.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.746 "adrfam": "ipv4", 00:29:11.746 "trsvcid": "$NVMF_PORT", 00:29:11.746 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.746 "hdgst": ${hdgst:-false}, 00:29:11.746 "ddgst": ${ddgst:-false} 00:29:11.746 }, 00:29:11.746 "method": "bdev_nvme_attach_controller" 00:29:11.746 } 00:29:11.746 EOF 00:29:11.746 )") 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.746 { 00:29:11.746 "params": { 00:29:11.746 "name": "Nvme$subsystem", 00:29:11.746 "trtype": "$TEST_TRANSPORT", 00:29:11.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.746 "adrfam": "ipv4", 00:29:11.746 "trsvcid": "$NVMF_PORT", 00:29:11.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.746 "hdgst": ${hdgst:-false}, 00:29:11.746 "ddgst": ${ddgst:-false} 00:29:11.746 }, 00:29:11.746 "method": "bdev_nvme_attach_controller" 00:29:11.746 } 00:29:11.746 EOF 00:29:11.746 )") 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.746 { 00:29:11.746 "params": { 00:29:11.746 "name": "Nvme$subsystem", 00:29:11.746 "trtype": "$TEST_TRANSPORT", 00:29:11.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.746 "adrfam": "ipv4", 00:29:11.746 "trsvcid": "$NVMF_PORT", 00:29:11.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.746 "hdgst": 
${hdgst:-false}, 00:29:11.746 "ddgst": ${ddgst:-false} 00:29:11.746 }, 00:29:11.746 "method": "bdev_nvme_attach_controller" 00:29:11.746 } 00:29:11.746 EOF 00:29:11.746 )") 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.746 { 00:29:11.746 "params": { 00:29:11.746 "name": "Nvme$subsystem", 00:29:11.746 "trtype": "$TEST_TRANSPORT", 00:29:11.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.746 "adrfam": "ipv4", 00:29:11.746 "trsvcid": "$NVMF_PORT", 00:29:11.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.746 "hdgst": ${hdgst:-false}, 00:29:11.746 "ddgst": ${ddgst:-false} 00:29:11.746 }, 00:29:11.746 "method": "bdev_nvme_attach_controller" 00:29:11.746 } 00:29:11.746 EOF 00:29:11.746 )") 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.746 { 00:29:11.746 "params": { 00:29:11.746 "name": "Nvme$subsystem", 00:29:11.746 "trtype": "$TEST_TRANSPORT", 00:29:11.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.746 "adrfam": "ipv4", 00:29:11.746 "trsvcid": "$NVMF_PORT", 00:29:11.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.746 "hdgst": ${hdgst:-false}, 00:29:11.746 "ddgst": ${ddgst:-false} 00:29:11.746 }, 00:29:11.746 "method": "bdev_nvme_attach_controller" 
00:29:11.746 } 00:29:11.746 EOF 00:29:11.746 )") 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.746 { 00:29:11.746 "params": { 00:29:11.746 "name": "Nvme$subsystem", 00:29:11.746 "trtype": "$TEST_TRANSPORT", 00:29:11.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.746 "adrfam": "ipv4", 00:29:11.746 "trsvcid": "$NVMF_PORT", 00:29:11.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.746 "hdgst": ${hdgst:-false}, 00:29:11.746 "ddgst": ${ddgst:-false} 00:29:11.746 }, 00:29:11.746 "method": "bdev_nvme_attach_controller" 00:29:11.746 } 00:29:11.746 EOF 00:29:11.746 )") 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.746 { 00:29:11.746 "params": { 00:29:11.746 "name": "Nvme$subsystem", 00:29:11.746 "trtype": "$TEST_TRANSPORT", 00:29:11.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.746 "adrfam": "ipv4", 00:29:11.746 "trsvcid": "$NVMF_PORT", 00:29:11.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.746 "hdgst": ${hdgst:-false}, 00:29:11.746 "ddgst": ${ddgst:-false} 00:29:11.746 }, 00:29:11.746 "method": "bdev_nvme_attach_controller" 00:29:11.746 } 00:29:11.746 EOF 00:29:11.746 )") 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@582 -- # cat 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.746 { 00:29:11.746 "params": { 00:29:11.746 "name": "Nvme$subsystem", 00:29:11.746 "trtype": "$TEST_TRANSPORT", 00:29:11.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.746 "adrfam": "ipv4", 00:29:11.746 "trsvcid": "$NVMF_PORT", 00:29:11.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.746 "hdgst": ${hdgst:-false}, 00:29:11.746 "ddgst": ${ddgst:-false} 00:29:11.746 }, 00:29:11.746 "method": "bdev_nvme_attach_controller" 00:29:11.746 } 00:29:11.746 EOF 00:29:11.746 )") 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.746 { 00:29:11.746 "params": { 00:29:11.746 "name": "Nvme$subsystem", 00:29:11.746 "trtype": "$TEST_TRANSPORT", 00:29:11.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.746 "adrfam": "ipv4", 00:29:11.746 "trsvcid": "$NVMF_PORT", 00:29:11.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.746 "hdgst": ${hdgst:-false}, 00:29:11.746 "ddgst": ${ddgst:-false} 00:29:11.746 }, 00:29:11.746 "method": "bdev_nvme_attach_controller" 00:29:11.746 } 00:29:11.746 EOF 00:29:11.746 )") 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@584 -- # jq . 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:29:11.746 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:11.746 "params": { 00:29:11.746 "name": "Nvme1", 00:29:11.746 "trtype": "tcp", 00:29:11.746 "traddr": "10.0.0.2", 00:29:11.746 "adrfam": "ipv4", 00:29:11.746 "trsvcid": "4420", 00:29:11.746 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:11.746 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:11.746 "hdgst": false, 00:29:11.746 "ddgst": false 00:29:11.746 }, 00:29:11.746 "method": "bdev_nvme_attach_controller" 00:29:11.746 },{ 00:29:11.746 "params": { 00:29:11.746 "name": "Nvme2", 00:29:11.746 "trtype": "tcp", 00:29:11.746 "traddr": "10.0.0.2", 00:29:11.746 "adrfam": "ipv4", 00:29:11.746 "trsvcid": "4420", 00:29:11.746 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:11.746 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:11.746 "hdgst": false, 00:29:11.746 "ddgst": false 00:29:11.746 }, 00:29:11.746 "method": "bdev_nvme_attach_controller" 00:29:11.746 },{ 00:29:11.746 "params": { 00:29:11.746 "name": "Nvme3", 00:29:11.746 "trtype": "tcp", 00:29:11.746 "traddr": "10.0.0.2", 00:29:11.746 "adrfam": "ipv4", 00:29:11.746 "trsvcid": "4420", 00:29:11.746 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:11.747 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:11.747 "hdgst": false, 00:29:11.747 "ddgst": false 00:29:11.747 }, 00:29:11.747 "method": "bdev_nvme_attach_controller" 00:29:11.747 },{ 00:29:11.747 "params": { 00:29:11.747 "name": "Nvme4", 00:29:11.747 "trtype": "tcp", 00:29:11.747 "traddr": "10.0.0.2", 00:29:11.747 "adrfam": "ipv4", 00:29:11.747 "trsvcid": "4420", 00:29:11.747 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:11.747 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:11.747 "hdgst": false, 00:29:11.747 "ddgst": false 00:29:11.747 }, 00:29:11.747 "method": "bdev_nvme_attach_controller" 00:29:11.747 },{ 
00:29:11.747 "params": { 00:29:11.747 "name": "Nvme5", 00:29:11.747 "trtype": "tcp", 00:29:11.747 "traddr": "10.0.0.2", 00:29:11.747 "adrfam": "ipv4", 00:29:11.747 "trsvcid": "4420", 00:29:11.747 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:11.747 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:11.747 "hdgst": false, 00:29:11.747 "ddgst": false 00:29:11.747 }, 00:29:11.747 "method": "bdev_nvme_attach_controller" 00:29:11.747 },{ 00:29:11.747 "params": { 00:29:11.747 "name": "Nvme6", 00:29:11.747 "trtype": "tcp", 00:29:11.747 "traddr": "10.0.0.2", 00:29:11.747 "adrfam": "ipv4", 00:29:11.747 "trsvcid": "4420", 00:29:11.747 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:11.747 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:11.747 "hdgst": false, 00:29:11.747 "ddgst": false 00:29:11.747 }, 00:29:11.747 "method": "bdev_nvme_attach_controller" 00:29:11.747 },{ 00:29:11.747 "params": { 00:29:11.747 "name": "Nvme7", 00:29:11.747 "trtype": "tcp", 00:29:11.747 "traddr": "10.0.0.2", 00:29:11.747 "adrfam": "ipv4", 00:29:11.747 "trsvcid": "4420", 00:29:11.747 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:11.747 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:11.747 "hdgst": false, 00:29:11.747 "ddgst": false 00:29:11.747 }, 00:29:11.747 "method": "bdev_nvme_attach_controller" 00:29:11.747 },{ 00:29:11.747 "params": { 00:29:11.747 "name": "Nvme8", 00:29:11.747 "trtype": "tcp", 00:29:11.747 "traddr": "10.0.0.2", 00:29:11.747 "adrfam": "ipv4", 00:29:11.747 "trsvcid": "4420", 00:29:11.747 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:11.747 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:11.747 "hdgst": false, 00:29:11.747 "ddgst": false 00:29:11.747 }, 00:29:11.747 "method": "bdev_nvme_attach_controller" 00:29:11.747 },{ 00:29:11.747 "params": { 00:29:11.747 "name": "Nvme9", 00:29:11.747 "trtype": "tcp", 00:29:11.747 "traddr": "10.0.0.2", 00:29:11.747 "adrfam": "ipv4", 00:29:11.747 "trsvcid": "4420", 00:29:11.747 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:11.747 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:29:11.747 "hdgst": false, 00:29:11.747 "ddgst": false 00:29:11.747 }, 00:29:11.747 "method": "bdev_nvme_attach_controller" 00:29:11.747 },{ 00:29:11.747 "params": { 00:29:11.747 "name": "Nvme10", 00:29:11.747 "trtype": "tcp", 00:29:11.747 "traddr": "10.0.0.2", 00:29:11.747 "adrfam": "ipv4", 00:29:11.747 "trsvcid": "4420", 00:29:11.747 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:11.747 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:11.747 "hdgst": false, 00:29:11.747 "ddgst": false 00:29:11.747 }, 00:29:11.747 "method": "bdev_nvme_attach_controller" 00:29:11.747 }' 00:29:11.747 [2024-11-18 20:30:23.568132] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:29:11.747 [2024-11-18 20:30:23.568207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid326526 ] 00:29:11.747 [2024-11-18 20:30:23.639324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.747 [2024-11-18 20:30:23.686801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.125 Running I/O for 10 seconds... 
00:29:13.693 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:13.693 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:13.693 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:13.693 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.693 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:13.693 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.693 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:13.693 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:13.693 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:13.693 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:29:13.693 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:29:13.693 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:13.693 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:13.693 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:13.693 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:13.693 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.693 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:13.693 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.693 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:13.693 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:13.693 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:29:13.693 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:29:13.693 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:29:13.693 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 326526 00:29:13.693 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 326526 ']' 00:29:13.693 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 326526 00:29:13.693 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:13.693 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:13.693 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 326526 00:29:13.954 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- 
# process_name=reactor_0 00:29:13.954 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:13.954 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 326526' 00:29:13.954 killing process with pid 326526 00:29:13.954 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 326526 00:29:13.954 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 326526 00:29:13.954 Received shutdown signal, test time was about 0.797503 seconds 00:29:13.954 00:29:13.954 Latency(us) 00:29:13.954 [2024-11-18T19:30:25.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.954 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:13.954 Verification LBA range: start 0x0 length 0x400 00:29:13.954 Nvme1n1 : 0.79 244.50 15.28 0.00 0.00 258226.00 21554.06 257872.02 00:29:13.954 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:13.954 Verification LBA range: start 0x0 length 0x400 00:29:13.954 Nvme2n1 : 0.80 241.09 15.07 0.00 0.00 255742.36 21554.06 254765.13 00:29:13.954 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:13.954 Verification LBA range: start 0x0 length 0x400 00:29:13.954 Nvme3n1 : 0.77 249.60 15.60 0.00 0.00 240476.67 16117.00 259425.47 00:29:13.954 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:13.954 Verification LBA range: start 0x0 length 0x400 00:29:13.954 Nvme4n1 : 0.77 260.02 16.25 0.00 0.00 222281.94 6796.33 243891.01 00:29:13.954 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:13.954 Verification LBA range: start 0x0 length 0x400 00:29:13.954 Nvme5n1 : 0.79 241.81 15.11 0.00 0.00 235776.63 19029.71 256318.58 
00:29:13.954 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:13.954 Verification LBA range: start 0x0 length 0x400 00:29:13.954 Nvme6n1 : 0.78 247.11 15.44 0.00 0.00 224989.74 38836.15 215928.98 00:29:13.954 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:13.954 Verification LBA range: start 0x0 length 0x400 00:29:13.954 Nvme7n1 : 0.78 246.32 15.39 0.00 0.00 219908.17 20000.62 246997.90 00:29:13.954 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:13.954 Verification LBA range: start 0x0 length 0x400 00:29:13.954 Nvme8n1 : 0.79 243.40 15.21 0.00 0.00 217039.71 18738.44 257872.02 00:29:13.954 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:13.954 Verification LBA range: start 0x0 length 0x400 00:29:13.954 Nvme9n1 : 0.76 168.13 10.51 0.00 0.00 301057.90 22330.79 271853.04 00:29:13.954 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:13.954 Verification LBA range: start 0x0 length 0x400 00:29:13.954 Nvme10n1 : 0.75 170.48 10.65 0.00 0.00 288667.50 20000.62 278066.82 00:29:13.954 [2024-11-18T19:30:25.962Z] =================================================================================================================== 00:29:13.954 [2024-11-18T19:30:25.962Z] Total : 2312.45 144.53 0.00 0.00 242875.78 6796.33 278066.82 00:29:14.213 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:29:15.149 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 326348 00:29:15.149 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:29:15.149 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:15.149 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:15.149 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:15.149 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:15.149 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:15.149 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:29:15.149 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:15.149 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:29:15.149 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:15.149 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:15.149 rmmod nvme_tcp 00:29:15.149 rmmod nvme_fabrics 00:29:15.149 rmmod nvme_keyring 00:29:15.149 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:15.149 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:29:15.149 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:29:15.149 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 326348 ']' 00:29:15.149 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 326348 00:29:15.149 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # 
'[' -z 326348 ']' 00:29:15.149 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 326348 00:29:15.149 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:15.149 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:15.149 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 326348 00:29:15.149 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:15.149 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:15.149 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 326348' 00:29:15.149 killing process with pid 326348 00:29:15.149 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 326348 00:29:15.149 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 326348 00:29:15.717 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:15.717 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:15.717 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:15.717 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:29:15.717 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:29:15.717 20:30:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:15.717 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:29:15.717 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:15.717 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:15.717 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.717 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:15.717 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.255 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:18.255 00:29:18.255 real 0m7.222s 00:29:18.255 user 0m21.278s 00:29:18.255 sys 0m1.455s 00:29:18.255 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:18.255 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:18.255 ************************************ 00:29:18.255 END TEST nvmf_shutdown_tc2 00:29:18.255 ************************************ 00:29:18.255 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:18.255 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:18.255 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:18.255 20:30:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:18.255 ************************************ 00:29:18.255 START TEST nvmf_shutdown_tc3 00:29:18.255 ************************************ 00:29:18.255 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:29:18.255 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:29:18.255 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:18.255 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:18.256 20:30:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:18.256 20:30:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:18.256 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:18.256 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:18.256 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:18.256 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:18.256 20:30:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:18.256 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:18.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:18.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:29:18.257 00:29:18.257 --- 10.0.0.2 ping statistics --- 00:29:18.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.257 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:18.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:18.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:29:18.257 00:29:18.257 --- 10.0.0.1 ping statistics --- 00:29:18.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.257 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:18.257 20:30:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=327336 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 327336 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 327336 ']' 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:18.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
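The `nvmf_tcp_init` sequence traced above moves one E810 port (cvl_0_0) into a fresh network namespace, addresses both ends, opens the firewall, and ping-checks the link before the target starts. The plumbing can be sketched as a dry run — `run()` echoes each command instead of executing it, since the real `ip`/`iptables` calls need root; interface names and addresses are the ones from this log:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns setup from nvmf/common.sh:nvmf_tcp_init.
# run() records and prints each command instead of executing it,
# because the real ip/iptables invocations require root.
NS=cvl_0_0_ns_spdk
cmds=()
run() { cmds+=("$*"); printf '%s\n' "$*"; }

run ip -4 addr flush cvl_0_0                                  # start from a clean slate
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                           # target side moves into the ns
run ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                        # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                    # target -> initiator
```

Every subsequent `nvmf_tgt` launch is then prefixed with `ip netns exec $NVMF_TARGET_NAMESPACE` (via `NVMF_TARGET_NS_CMD`) so the target binds 10.0.0.2:4420 inside the namespace.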
00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:18.257 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:18.257 [2024-11-18 20:30:29.941752] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:29:18.257 [2024-11-18 20:30:29.941843] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:18.257 [2024-11-18 20:30:30.026019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:18.257 [2024-11-18 20:30:30.078593] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:18.257 [2024-11-18 20:30:30.078671] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:18.257 [2024-11-18 20:30:30.078693] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:18.257 [2024-11-18 20:30:30.078705] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:18.257 [2024-11-18 20:30:30.078715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:18.257 [2024-11-18 20:30:30.080428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:18.257 [2024-11-18 20:30:30.080478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:18.257 [2024-11-18 20:30:30.080538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:18.257 [2024-11-18 20:30:30.080540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.257 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:18.257 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:18.257 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:18.257 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:18.257 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:18.257 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:18.257 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:18.257 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.257 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:18.257 [2024-11-18 20:30:30.234188] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:18.257 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.257 20:30:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:18.257 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:18.257 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:18.257 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:18.257 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:18.257 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:18.257 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:18.257 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:18.257 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:18.257 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:18.257 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:18.257 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:18.257 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:18.257 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:18.257 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:29:18.517 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:18.517 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:18.517 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:18.517 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:18.517 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:18.517 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:18.517 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:18.517 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:18.517 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:18.517 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:18.517 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:18.517 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.517 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:18.517 Malloc1 00:29:18.517 [2024-11-18 20:30:30.339013] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:18.517 Malloc2 00:29:18.517 Malloc3 00:29:18.517 Malloc4 00:29:18.517 Malloc5 00:29:18.777 Malloc6 00:29:18.777 Malloc7 00:29:18.777 Malloc8 00:29:18.777 Malloc9 
00:29:18.777 Malloc10 00:29:19.037 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.037 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:19.037 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:19.037 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:19.037 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=327487 00:29:19.037 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 327487 /var/tmp/bdevperf.sock 00:29:19.037 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 327487 ']' 00:29:19.037 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:19.037 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:19.037 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:19.037 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:19.037 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
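Both app launches above gate on `waitforlisten`, whose locals (`rpc_addr=/var/tmp/spdk.sock`, `max_retries=100`) show up in the trace: the helper polls until the given pid is alive and its UNIX-domain RPC socket appears. A minimal sketch of that loop, assuming the retry interval (the real helper in `autotest_common.sh` differs in detail):

```shell
# Hypothetical sketch of the waitforlisten pattern from
# autotest_common.sh: poll until the process is alive and its
# UNIX-domain RPC socket exists, up to max_retries attempts.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # process died: give up
        [ -S "$rpc_addr" ] && return 0           # socket is up: ready
        sleep 0.1                                # interval is an assumption
    done
    return 1                                     # timed out
}
```

In this log, `waitforlisten 327336` guards the target on `/var/tmp/spdk.sock`, and `waitforlisten 327487 /var/tmp/bdevperf.sock` guards the bdevperf app before any RPCs are issued against it.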
00:29:19.037 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:29:19.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:19.037 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:29:19.037 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:19.037 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:19.037 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:19.037 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:19.037 { 00:29:19.037 "params": { 00:29:19.037 "name": "Nvme$subsystem", 00:29:19.037 "trtype": "$TEST_TRANSPORT", 00:29:19.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:19.037 "adrfam": "ipv4", 00:29:19.037 "trsvcid": "$NVMF_PORT", 00:29:19.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:19.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:19.037 "hdgst": ${hdgst:-false}, 00:29:19.037 "ddgst": ${ddgst:-false} 00:29:19.037 }, 00:29:19.037 "method": "bdev_nvme_attach_controller" 00:29:19.037 } 00:29:19.037 EOF 00:29:19.037 )") 00:29:19.037 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:19.037 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:19.037 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:19.037 { 00:29:19.037 "params": { 00:29:19.037 "name": "Nvme$subsystem", 00:29:19.037 "trtype": "$TEST_TRANSPORT", 00:29:19.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:19.037 
"adrfam": "ipv4", 00:29:19.037 "trsvcid": "$NVMF_PORT", 00:29:19.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:19.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:19.037 "hdgst": ${hdgst:-false}, 00:29:19.037 "ddgst": ${ddgst:-false} 00:29:19.037 }, 00:29:19.037 "method": "bdev_nvme_attach_controller" 00:29:19.037 } 00:29:19.037 EOF 00:29:19.037 )") 00:29:19.037 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:19.037 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:19.037 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:19.037 { 00:29:19.037 "params": { 00:29:19.037 "name": "Nvme$subsystem", 00:29:19.037 "trtype": "$TEST_TRANSPORT", 00:29:19.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:19.037 "adrfam": "ipv4", 00:29:19.037 "trsvcid": "$NVMF_PORT", 00:29:19.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:19.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:19.037 "hdgst": ${hdgst:-false}, 00:29:19.037 "ddgst": ${ddgst:-false} 00:29:19.038 }, 00:29:19.038 "method": "bdev_nvme_attach_controller" 00:29:19.038 } 00:29:19.038 EOF 00:29:19.038 )") 00:29:19.038 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:19.038 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:19.038 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:19.038 { 00:29:19.038 "params": { 00:29:19.038 "name": "Nvme$subsystem", 00:29:19.038 "trtype": "$TEST_TRANSPORT", 00:29:19.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:19.038 "adrfam": "ipv4", 00:29:19.038 "trsvcid": "$NVMF_PORT", 00:29:19.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:29:19.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:19.038 "hdgst": ${hdgst:-false}, 00:29:19.038 "ddgst": ${ddgst:-false} 00:29:19.038 }, 00:29:19.038 "method": "bdev_nvme_attach_controller" 00:29:19.038 } 00:29:19.038 EOF 00:29:19.038 )") 00:29:19.038 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:19.038 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:19.038 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:19.038 { 00:29:19.038 "params": { 00:29:19.038 "name": "Nvme$subsystem", 00:29:19.038 "trtype": "$TEST_TRANSPORT", 00:29:19.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:19.038 "adrfam": "ipv4", 00:29:19.038 "trsvcid": "$NVMF_PORT", 00:29:19.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:19.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:19.038 "hdgst": ${hdgst:-false}, 00:29:19.038 "ddgst": ${ddgst:-false} 00:29:19.038 }, 00:29:19.038 "method": "bdev_nvme_attach_controller" 00:29:19.038 } 00:29:19.038 EOF 00:29:19.038 )") 00:29:19.038 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:19.038 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:19.038 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:19.038 { 00:29:19.038 "params": { 00:29:19.038 "name": "Nvme$subsystem", 00:29:19.038 "trtype": "$TEST_TRANSPORT", 00:29:19.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:19.038 "adrfam": "ipv4", 00:29:19.038 "trsvcid": "$NVMF_PORT", 00:29:19.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:19.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:19.038 "hdgst": ${hdgst:-false}, 00:29:19.038 "ddgst": 
${ddgst:-false} 00:29:19.038 }, 00:29:19.038 "method": "bdev_nvme_attach_controller" 00:29:19.038 } 00:29:19.038 EOF 00:29:19.038 )") 00:29:19.038 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:19.038 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:19.038 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:19.038 { 00:29:19.038 "params": { 00:29:19.038 "name": "Nvme$subsystem", 00:29:19.038 "trtype": "$TEST_TRANSPORT", 00:29:19.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:19.038 "adrfam": "ipv4", 00:29:19.038 "trsvcid": "$NVMF_PORT", 00:29:19.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:19.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:19.038 "hdgst": ${hdgst:-false}, 00:29:19.038 "ddgst": ${ddgst:-false} 00:29:19.038 }, 00:29:19.038 "method": "bdev_nvme_attach_controller" 00:29:19.038 } 00:29:19.038 EOF 00:29:19.038 )") 00:29:19.038 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:19.038 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:19.038 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:19.038 { 00:29:19.038 "params": { 00:29:19.038 "name": "Nvme$subsystem", 00:29:19.038 "trtype": "$TEST_TRANSPORT", 00:29:19.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:19.038 "adrfam": "ipv4", 00:29:19.038 "trsvcid": "$NVMF_PORT", 00:29:19.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:19.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:19.038 "hdgst": ${hdgst:-false}, 00:29:19.038 "ddgst": ${ddgst:-false} 00:29:19.038 }, 00:29:19.038 "method": "bdev_nvme_attach_controller" 00:29:19.038 } 00:29:19.038 EOF 00:29:19.038 
)") 00:29:19.038 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:19.038 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:19.038 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:19.038 { 00:29:19.038 "params": { 00:29:19.038 "name": "Nvme$subsystem", 00:29:19.038 "trtype": "$TEST_TRANSPORT", 00:29:19.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:19.038 "adrfam": "ipv4", 00:29:19.038 "trsvcid": "$NVMF_PORT", 00:29:19.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:19.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:19.038 "hdgst": ${hdgst:-false}, 00:29:19.038 "ddgst": ${ddgst:-false} 00:29:19.038 }, 00:29:19.038 "method": "bdev_nvme_attach_controller" 00:29:19.038 } 00:29:19.038 EOF 00:29:19.038 )") 00:29:19.038 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:19.038 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:19.038 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:19.038 { 00:29:19.038 "params": { 00:29:19.038 "name": "Nvme$subsystem", 00:29:19.038 "trtype": "$TEST_TRANSPORT", 00:29:19.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:19.038 "adrfam": "ipv4", 00:29:19.038 "trsvcid": "$NVMF_PORT", 00:29:19.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:19.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:19.038 "hdgst": ${hdgst:-false}, 00:29:19.038 "ddgst": ${ddgst:-false} 00:29:19.038 }, 00:29:19.038 "method": "bdev_nvme_attach_controller" 00:29:19.038 } 00:29:19.038 EOF 00:29:19.038 )") 00:29:19.038 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:19.038 
20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:29:19.038 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:29:19.038 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:19.038 "params": { 00:29:19.038 "name": "Nvme1", 00:29:19.038 "trtype": "tcp", 00:29:19.038 "traddr": "10.0.0.2", 00:29:19.038 "adrfam": "ipv4", 00:29:19.038 "trsvcid": "4420", 00:29:19.038 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:19.038 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:19.038 "hdgst": false, 00:29:19.038 "ddgst": false 00:29:19.038 }, 00:29:19.038 "method": "bdev_nvme_attach_controller" 00:29:19.038 },{ 00:29:19.038 "params": { 00:29:19.038 "name": "Nvme2", 00:29:19.038 "trtype": "tcp", 00:29:19.038 "traddr": "10.0.0.2", 00:29:19.038 "adrfam": "ipv4", 00:29:19.038 "trsvcid": "4420", 00:29:19.038 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:19.038 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:19.038 "hdgst": false, 00:29:19.038 "ddgst": false 00:29:19.038 }, 00:29:19.038 "method": "bdev_nvme_attach_controller" 00:29:19.038 },{ 00:29:19.038 "params": { 00:29:19.038 "name": "Nvme3", 00:29:19.038 "trtype": "tcp", 00:29:19.038 "traddr": "10.0.0.2", 00:29:19.038 "adrfam": "ipv4", 00:29:19.038 "trsvcid": "4420", 00:29:19.038 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:19.038 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:19.038 "hdgst": false, 00:29:19.038 "ddgst": false 00:29:19.038 }, 00:29:19.038 "method": "bdev_nvme_attach_controller" 00:29:19.038 },{ 00:29:19.038 "params": { 00:29:19.038 "name": "Nvme4", 00:29:19.038 "trtype": "tcp", 00:29:19.038 "traddr": "10.0.0.2", 00:29:19.038 "adrfam": "ipv4", 00:29:19.038 "trsvcid": "4420", 00:29:19.038 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:19.038 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:19.038 "hdgst": false, 00:29:19.038 "ddgst": false 00:29:19.038 }, 
00:29:19.038 "method": "bdev_nvme_attach_controller" 00:29:19.038 },{ 00:29:19.038 "params": { 00:29:19.038 "name": "Nvme5", 00:29:19.038 "trtype": "tcp", 00:29:19.038 "traddr": "10.0.0.2", 00:29:19.038 "adrfam": "ipv4", 00:29:19.038 "trsvcid": "4420", 00:29:19.038 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:19.038 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:19.038 "hdgst": false, 00:29:19.038 "ddgst": false 00:29:19.038 }, 00:29:19.038 "method": "bdev_nvme_attach_controller" 00:29:19.038 },{ 00:29:19.038 "params": { 00:29:19.038 "name": "Nvme6", 00:29:19.038 "trtype": "tcp", 00:29:19.039 "traddr": "10.0.0.2", 00:29:19.039 "adrfam": "ipv4", 00:29:19.039 "trsvcid": "4420", 00:29:19.039 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:19.039 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:19.039 "hdgst": false, 00:29:19.039 "ddgst": false 00:29:19.039 }, 00:29:19.039 "method": "bdev_nvme_attach_controller" 00:29:19.039 },{ 00:29:19.039 "params": { 00:29:19.039 "name": "Nvme7", 00:29:19.039 "trtype": "tcp", 00:29:19.039 "traddr": "10.0.0.2", 00:29:19.039 "adrfam": "ipv4", 00:29:19.039 "trsvcid": "4420", 00:29:19.039 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:19.039 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:19.039 "hdgst": false, 00:29:19.039 "ddgst": false 00:29:19.039 }, 00:29:19.039 "method": "bdev_nvme_attach_controller" 00:29:19.039 },{ 00:29:19.039 "params": { 00:29:19.039 "name": "Nvme8", 00:29:19.039 "trtype": "tcp", 00:29:19.039 "traddr": "10.0.0.2", 00:29:19.039 "adrfam": "ipv4", 00:29:19.039 "trsvcid": "4420", 00:29:19.039 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:19.039 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:19.039 "hdgst": false, 00:29:19.039 "ddgst": false 00:29:19.039 }, 00:29:19.039 "method": "bdev_nvme_attach_controller" 00:29:19.039 },{ 00:29:19.039 "params": { 00:29:19.039 "name": "Nvme9", 00:29:19.039 "trtype": "tcp", 00:29:19.039 "traddr": "10.0.0.2", 00:29:19.039 "adrfam": "ipv4", 00:29:19.039 "trsvcid": "4420", 00:29:19.039 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:19.039 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:19.039 "hdgst": false, 00:29:19.039 "ddgst": false 00:29:19.039 }, 00:29:19.039 "method": "bdev_nvme_attach_controller" 00:29:19.039 },{ 00:29:19.039 "params": { 00:29:19.039 "name": "Nvme10", 00:29:19.039 "trtype": "tcp", 00:29:19.039 "traddr": "10.0.0.2", 00:29:19.039 "adrfam": "ipv4", 00:29:19.039 "trsvcid": "4420", 00:29:19.039 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:19.039 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:19.039 "hdgst": false, 00:29:19.039 "ddgst": false 00:29:19.039 }, 00:29:19.039 "method": "bdev_nvme_attach_controller" 00:29:19.039 }' 00:29:19.039 [2024-11-18 20:30:30.867309] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:29:19.039 [2024-11-18 20:30:30.867397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid327487 ] 00:29:19.039 [2024-11-18 20:30:30.939485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.039 [2024-11-18 20:30:30.986741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.944 Running I/O for 10 seconds... 
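The stream above is ten bdev_nvme_attach_controller requests, one per subsystem (cnode1 through cnode10), which nvmf/common.sh comma-joins (IFS=, plus printf, per the @584-586 markers) before handing to bdevperf. A minimal sketch of how such a stream can be assembled; gen_attach_params is a hypothetical helper for illustration, not an SPDK function:

```shell
#!/usr/bin/env bash
# Hypothetical helper (not part of SPDK): emit one attach-controller
# request for subsystem $1, shaped like the entries in the log above.
gen_attach_params() {
    local i=$1
    printf '{ "params": { "name": "Nvme%s", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" }' "$i" "$i" "$i"
}

parts=()
for i in $(seq 1 10); do
    parts+=("$(gen_attach_params "$i")")
done

IFS=,                      # comma-join the requests, as nvmf/common.sh@585 does
config="${parts[*]}"
unset IFS
printf '%s\n' "$config"
```

The comma-joined objects are not a JSON array; bdevperf's config reader consumes them as a sequence, which is why the log shows `},{` between entries.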
00:29:21.202 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:21.202 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:21.202 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:21.202 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.202 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:21.202 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.202 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:21.202 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:21.202 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:21.202 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:21.202 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:21.202 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:21.202 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:21.202 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:21.202 20:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:21.202 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:21.202 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.202 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:21.202 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.202 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:29:21.202 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:29:21.202 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:21.461 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:21.461 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:21.461 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:21.461 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:21.461 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.461 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:21.461 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:29:21.461 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:21.461 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:21.461 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:21.725 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:21.725 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:21.725 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:21.725 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:21.725 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.725 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:21.725 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.725 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=156 00:29:21.725 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 156 -ge 100 ']' 00:29:21.725 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:21.725 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:29:21.725 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:21.725 20:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 327336 00:29:21.725 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 327336 ']' 00:29:21.725 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 327336 00:29:21.725 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:29:21.725 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:21.725 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 327336 00:29:21.725 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:21.725 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:21.725 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 327336' 00:29:21.725 killing process with pid 327336 00:29:21.725 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 327336 00:29:21.725 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 327336 00:29:21.725 [2024-11-18 20:30:33.683129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7cc00 is same with the state(6) to be set 00:29:21.725 [2024-11-18 20:30:33.683374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7cc00 is same with the state(6) to be set 00:29:21.725 [2024-11-18 20:30:33.683392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1b7cc00 is same with the state(6) to be set 00:29:21.725 [... identical tcp.c:1773 *ERROR* "recv state of tqpair=0x1b7cc00" lines, timestamps 2024-11-18 20:30:33.683416 through 20:30:33.684187, omitted ...] 00:29:21.726 [2024-11-18 20:30:33.684200]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7cc00 is same with the state(6) to be set 00:29:21.726 [2024-11-18 20:30:33.686205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.726 [2024-11-18 20:30:33.686255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.726 [2024-11-18 20:30:33.686275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.726 [2024-11-18 20:30:33.686289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.726 [2024-11-18 20:30:33.686303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.726 [2024-11-18 20:30:33.686328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.726 [2024-11-18 20:30:33.686342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.726 [2024-11-18 20:30:33.686356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.726 [2024-11-18 20:30:33.686372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf2450 is same with the state(6) to be set 00:29:21.726 [2024-11-18 20:30:33.686597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7f790 is same with the state(6) to be set 00:29:21.726 [2024-11-18 20:30:33.686630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7f790 is same with the state(6) to 
be set 00:29:21.726 [... identical tcp.c:1773 *ERROR* "recv state of tqpair=0x1b7f790" lines, timestamps 2024-11-18 20:30:33.686656 through 20:30:33.687462, omitted ...] 00:29:21.727 [2024-11-18 20:30:33.687473]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7f790 is same with the state(6) to be set 00:29:21.727 [2024-11-18 20:30:33.688621] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:21.727 [2024-11-18 20:30:33.692742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7d0d0 is same with the state(6) to be set 00:29:21.727 [... identical lines for tqpair=0x1b7d0d0, timestamps 20:30:33.692785 through 20:30:33.692994, omitted ...] 00:29:21.727 [2024-11-18 20:30:33.693869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7d5a0 is same with the state(6) to be set 00:29:21.727 [... two further identical lines for tqpair=0x1b7d5a0 (20:30:33.693904, 20:30:33.693921) omitted ...] 00:29:21.727 [2024-11-18 20:30:33.694940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.727 [... identical lines for tqpair=0x1b7da90, timestamps 20:30:33.694980 through 20:30:33.695150, omitted ...]
00:29:21.727 [2024-11-18 20:30:33.695163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 
20:30:33.695327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695477] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695629] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.695806] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7da90 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697236] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697400] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.728 [2024-11-18 20:30:33.697563] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.697575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.697587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.697599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.697612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.697625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.697645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.697676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.697691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.697705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.697718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.697731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.697744] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.697758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.697771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.697784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.697798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.697811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.697828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.697842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.697856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.697870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.697884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.697897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.697909] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.697922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.697935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.697947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e430 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699354] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699512] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699704] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699867] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e900 is same with the state(6) to be set 00:29:21.729 [2024-11-18 20:30:33.699881]
00:29:21.730 [previous message repeated for tqpair=0x1b7e900 through 20:30:33.700100, then for tqpair=0x1b7f2c0 from 20:30:33.701377 through 20:30:33.702219]
00:29:21.731 [2024-11-18 20:30:33.709240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:21.731 [2024-11-18 20:30:33.709294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:21.731 [ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs repeated for qid:0 cid:1-3, each group followed by nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=... is same with the state(6) to be set, for tqpair=0xdf0700, 0x122de90, 0x1227120, 0x1255b40, 0x124db90, 0x122e800, 0xdec280, 0xdfa0b0, 0xd3af50 (20:30:33.709240 through 20:30:33.710859)]
00:29:21.731 [2024-11-18 20:30:33.710681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf2450 (9): Bad file descriptor
00:29:22.001 [2024-11-18 20:30:33.731038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:22.001 [2024-11-18 20:30:33.731124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:22.002 [WRITE / ABORTED - SQ DELETION pairs repeated for sqid:1 cid:1-38, lba 24704 through 29440 in steps of 128]
00:29:22.002 [2024-11-18 20:30:33.732397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:22.002 [2024-11-18 20:30:33.732411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:29:22.002 [2024-11-18 20:30:33.732427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.002 [2024-11-18 20:30:33.732441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.002 [2024-11-18 20:30:33.732458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.002 [2024-11-18 20:30:33.732472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.002 [2024-11-18 20:30:33.732488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.002 [2024-11-18 20:30:33.732502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.002 [2024-11-18 20:30:33.732518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.002 [2024-11-18 20:30:33.732533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.002 [2024-11-18 20:30:33.732550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.002 [2024-11-18 20:30:33.732564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.002 [2024-11-18 20:30:33.732580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.002 [2024-11-18 
20:30:33.732594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.002 [2024-11-18 20:30:33.732611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.002 [2024-11-18 20:30:33.732625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.002 [2024-11-18 20:30:33.732653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.002 [2024-11-18 20:30:33.732670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.002 [2024-11-18 20:30:33.732687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.002 [2024-11-18 20:30:33.732702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.002 [2024-11-18 20:30:33.732717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.002 [2024-11-18 20:30:33.732732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.002 [2024-11-18 20:30:33.732748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.002 [2024-11-18 20:30:33.732763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.002 [2024-11-18 20:30:33.732779] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.002 [2024-11-18 20:30:33.732794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.002 [2024-11-18 20:30:33.732811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.002 [2024-11-18 20:30:33.732825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.002 [2024-11-18 20:30:33.732841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.002 [2024-11-18 20:30:33.732855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.002 [2024-11-18 20:30:33.732871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.002 [2024-11-18 20:30:33.732885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.002 [2024-11-18 20:30:33.732902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.002 [2024-11-18 20:30:33.732916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.732933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.732947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.732963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.732977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.732994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.733008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.733025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.733044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.733061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.733075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.733091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.733106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.733122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.733137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.733153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.733167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.733183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a32f0 is same with the state(6) to be set 00:29:22.003 [2024-11-18 20:30:33.733537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf0700 (9): Bad file descriptor 00:29:22.003 [2024-11-18 20:30:33.733587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x122de90 (9): Bad file descriptor 00:29:22.003 [2024-11-18 20:30:33.733620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1227120 (9): Bad file descriptor 00:29:22.003 [2024-11-18 20:30:33.733664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1255b40 (9): Bad file descriptor 00:29:22.003 [2024-11-18 20:30:33.733692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x124db90 (9): Bad file descriptor 00:29:22.003 [2024-11-18 20:30:33.733720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x122e800 (9): Bad file descriptor 00:29:22.003 [2024-11-18 20:30:33.733752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdec280 (9): Bad file descriptor 00:29:22.003 [2024-11-18 20:30:33.733782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdfa0b0 (9): Bad 
file descriptor 00:29:22.003 [2024-11-18 20:30:33.733820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd3af50 (9): Bad file descriptor 00:29:22.003 [2024-11-18 20:30:33.733970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.733994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.734017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.734033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.734049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.734065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.734082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.734102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.734119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.734134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.734150] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.734165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.734181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.734195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.734211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.734226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.734242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.734256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.734272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.734287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.734311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.734326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.734343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.734357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.734373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.734386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.734403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.734416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.734433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.734447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.734463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.734477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.734496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.734512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.734527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.734541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.734556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.734571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.734587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.734601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.734616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.734630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.734658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.734674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.734690] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.734704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.734719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.734734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.734749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.734763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.003 [2024-11-18 20:30:33.734779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.003 [2024-11-18 20:30:33.734792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.004 [2024-11-18 20:30:33.734814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.004 [2024-11-18 20:30:33.734829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.004 [2024-11-18 20:30:33.734845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.004 [2024-11-18 20:30:33.734859] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.004 [2024-11-18 20:30:33.734875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.004 [2024-11-18 20:30:33.734893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.004 [2024-11-18 20:30:33.734909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.004 [2024-11-18 20:30:33.734923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.004 [2024-11-18 20:30:33.734939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.004 [2024-11-18 20:30:33.734953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.004 [2024-11-18 20:30:33.734969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.004 [2024-11-18 20:30:33.734983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.004 [2024-11-18 20:30:33.734999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.004 [2024-11-18 20:30:33.735012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.004 [2024-11-18 20:30:33.735028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.004 [2024-11-18 20:30:33.735042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.004 [2024-11-18 20:30:33.735058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.004 [2024-11-18 20:30:33.735072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.004 [2024-11-18 20:30:33.735088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.004 [2024-11-18 20:30:33.735102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.004 [2024-11-18 20:30:33.735117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.004 [2024-11-18 20:30:33.735132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.004 [2024-11-18 20:30:33.735147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.004 [2024-11-18 20:30:33.735161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.004 [2024-11-18 20:30:33.735177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.004 [2024-11-18 20:30:33.735191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.004 [2024-11-18 
20:30:33.735207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.004 [2024-11-18 20:30:33.735221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.004 [2024-11-18 20:30:33.735237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.004 [2024-11-18 20:30:33.735252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.004 [2024-11-18 20:30:33.735271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.004 [2024-11-18 20:30:33.735285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.004 [2024-11-18 20:30:33.735307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.004 [2024-11-18 20:30:33.735322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.004 [2024-11-18 20:30:33.735337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.004 [2024-11-18 20:30:33.735351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.004 [2024-11-18 20:30:33.735366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.004 [2024-11-18 20:30:33.735381] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.004 [2024-11-18 20:30:33.735397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.004 [2024-11-18 20:30:33.735411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.004 [2024-11-18 20:30:33.735426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.004 [2024-11-18 20:30:33.735440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.004 [2024-11-18 20:30:33.735456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.004 [2024-11-18 20:30:33.735470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.004 [2024-11-18 20:30:33.735486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.004 [2024-11-18 20:30:33.735500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.004 [2024-11-18 20:30:33.735515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.004 [2024-11-18 20:30:33.735529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.004 [2024-11-18 20:30:33.735546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 
nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:22.004 [2024-11-18 20:30:33.735560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:22.004 [2024-11-18 20:30:33.735575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... identical ABORTED - SQ DELETION (00/08) completion notices repeated for READ cid:50-62 (lba 30976-32512, len:128) ...]
00:29:22.004 [2024-11-18 20:30:33.735967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:22.004 [2024-11-18 20:30:33.736177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... identical ABORTED - SQ DELETION (00/08) completion notices repeated for WRITE cid:0-63 (lba 24576-32640, len:128) ...]
00:29:22.006 [2024-11-18 20:30:33.753345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:22.006 [2024-11-18 20:30:33.753363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf5720 is same with the state(6) to be set
00:29:22.006 [2024-11-18 20:30:33.755037] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:22.006 [2024-11-18 20:30:33.755220] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:29:22.006 [2024-11-18 20:30:33.755250] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:29:22.006 [2024-11-18 20:30:33.755278] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:29:22.006 [2024-11-18 20:30:33.755300] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:29:22.006 [2024-11-18 20:30:33.755377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... identical ABORTED - SQ DELETION (00/08) completion notices repeated for WRITE cid:58-63 (lba 32000-32640) and READ cid:0-25 (lba 24576-27776) ...]
00:29:22.007 [2024-11-18 20:30:33.756402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:22.007 [2024-11-18 20:30:33.756419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.007 [2024-11-18 20:30:33.756433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.007 [2024-11-18 20:30:33.756450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.007 [2024-11-18 20:30:33.756465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.007 [2024-11-18 20:30:33.756481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.007 [2024-11-18 20:30:33.756495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.007 [2024-11-18 20:30:33.756514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.007 [2024-11-18 20:30:33.756532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.007 [2024-11-18 20:30:33.756549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.007 [2024-11-18 20:30:33.756564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.007 [2024-11-18 20:30:33.756581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.007 [2024-11-18 20:30:33.756595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:22.007 [2024-11-18 20:30:33.756611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.007 [2024-11-18 20:30:33.756626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.007 [2024-11-18 20:30:33.756650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.007 [2024-11-18 20:30:33.756665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.007 [2024-11-18 20:30:33.756682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.007 [2024-11-18 20:30:33.756696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.007 [2024-11-18 20:30:33.756712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.007 [2024-11-18 20:30:33.756727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.007 [2024-11-18 20:30:33.756743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.007 [2024-11-18 20:30:33.756758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.007 [2024-11-18 20:30:33.756774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.007 [2024-11-18 
20:30:33.756788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.007 [2024-11-18 20:30:33.756804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.007 [2024-11-18 20:30:33.756818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.007 [2024-11-18 20:30:33.756835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.756850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.756866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.756880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.756896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.756911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.756931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.756946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.756963] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.756978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.756994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.757008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.757025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.757039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.757056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.757070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.757087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.757101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.757118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.757133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.757149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.757164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.757180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.757194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.757211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.757226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.757242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.757256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.757273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.757287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.757303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 
[2024-11-18 20:30:33.757321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.757338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.757352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.757369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.757383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.757400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.757415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.757429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cb8b0 is same with the state(6) to be set 00:29:22.008 [2024-11-18 20:30:33.761472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.761514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.761541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.761557] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.761574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.761588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.761604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.761619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.761641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.761657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.761673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.761688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.761705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.761720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.761737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.761752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.761768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.761789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.761806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.761821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.761837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.761852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.761868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.761883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.761899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.761913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:22.008 [2024-11-18 20:30:33.761929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.761943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.761960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.761974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.761990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.762004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.762021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.762035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.762051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.762066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.762083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.762097] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.762114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.008 [2024-11-18 20:30:33.762128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.008 [2024-11-18 20:30:33.762144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.009 [2024-11-18 20:30:33.762158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.009 [2024-11-18 20:30:33.762179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.009 [2024-11-18 20:30:33.762194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.009 [2024-11-18 20:30:33.762211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.009 [2024-11-18 20:30:33.762225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.009 [2024-11-18 20:30:33.762242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.009 [2024-11-18 20:30:33.762256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.009 [2024-11-18 20:30:33.762271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.009 [2024-11-18 20:30:33.762285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.009 [2024-11-18 20:30:33.762302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.009 [2024-11-18 20:30:33.762315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.009 [2024-11-18 20:30:33.762331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.009 [2024-11-18 20:30:33.762346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.009 [2024-11-18 20:30:33.762361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.009 [2024-11-18 20:30:33.762375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.009 [2024-11-18 20:30:33.762390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.009 [2024-11-18 20:30:33.762405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.009 [2024-11-18 20:30:33.762421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.009 [2024-11-18 20:30:33.762435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:22.009 [2024-11-18 20:30:33.762451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.009 [2024-11-18 20:30:33.762465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.009 [2024-11-18 20:30:33.762481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.009 [2024-11-18 20:30:33.762495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.009 [2024-11-18 20:30:33.762512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.009 [2024-11-18 20:30:33.762526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.009 [2024-11-18 20:30:33.762542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.009 [2024-11-18 20:30:33.762560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.009 [2024-11-18 20:30:33.762577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.009 [2024-11-18 20:30:33.762591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.009 [2024-11-18 20:30:33.762607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.009 [2024-11-18 
20:30:33.762621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.009 [2024-11-18 20:30:33.762645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.009 [2024-11-18 20:30:33.762662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.009 [2024-11-18 20:30:33.762688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.009 [2024-11-18 20:30:33.762702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.009 [2024-11-18 20:30:33.762719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.009 [2024-11-18 20:30:33.762733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.009 [2024-11-18 20:30:33.762749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.009 [2024-11-18 20:30:33.762763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.009 [2024-11-18 20:30:33.762779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.009 [2024-11-18 20:30:33.762793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.009 [2024-11-18 20:30:33.762809] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.009 [2024-11-18 20:30:33.762823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.009 [2024-11-18 20:30:33.762839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.009 [2024-11-18 20:30:33.762853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.009 [2024-11-18 20:30:33.762869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.009 [2024-11-18 20:30:33.762883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.009 [2024-11-18 20:30:33.762899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.009 [2024-11-18 20:30:33.762913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.009 [2024-11-18 20:30:33.762930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.009 [2024-11-18 20:30:33.762943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.009 [2024-11-18 20:30:33.762963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.009 [2024-11-18 20:30:33.762978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-18 20:30:33.762839 - 20:30:33.763497] 22 further nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: READ sqid:1 cid:46-63 (lba:30464-32640 len:128) and WRITE sqid:1 cid:0-3 (lba:32768-33152 len:128), each ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-18 20:30:33.764772 - 20:30:33.766788] 64 pairs elided: READ sqid:1 cid:0-63 (lba:16384-24448 len:128), each ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-18 20:30:33.768046 - 20:30:33.768867] 26 pairs elided: READ sqid:1 cid:0-25 (lba:16384-19584 len:128), each ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, continuing with:
00:29:22.012 [2024-11-18 20:30:33.768883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ
sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.012 [2024-11-18 20:30:33.768897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.012 [2024-11-18 20:30:33.768912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.012 [2024-11-18 20:30:33.768926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.012 [2024-11-18 20:30:33.768942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.012 [2024-11-18 20:30:33.768956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.012 [2024-11-18 20:30:33.768971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.012 [2024-11-18 20:30:33.768985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.012 [2024-11-18 20:30:33.769000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.012 [2024-11-18 20:30:33.769015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.012 [2024-11-18 20:30:33.769034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.012 [2024-11-18 20:30:33.769048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:22.012 [2024-11-18 20:30:33.769064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.012 [2024-11-18 20:30:33.769079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.012 [2024-11-18 20:30:33.769100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.012 [2024-11-18 20:30:33.769114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.012 [2024-11-18 20:30:33.769129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.012 [2024-11-18 20:30:33.769143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.012 [2024-11-18 20:30:33.769159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.012 [2024-11-18 20:30:33.769173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.012 [2024-11-18 20:30:33.769189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.012 [2024-11-18 20:30:33.769203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.012 [2024-11-18 20:30:33.769219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.012 [2024-11-18 
20:30:33.769233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.012 [2024-11-18 20:30:33.769248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.012 [2024-11-18 20:30:33.769262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.012 [2024-11-18 20:30:33.769278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.012 [2024-11-18 20:30:33.769292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.012 [2024-11-18 20:30:33.769307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.012 [2024-11-18 20:30:33.778879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.012 [2024-11-18 20:30:33.778968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.012 [2024-11-18 20:30:33.778987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.012 [2024-11-18 20:30:33.779004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.012 [2024-11-18 20:30:33.779018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.012 [2024-11-18 20:30:33.779034] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.012 [2024-11-18 20:30:33.779061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.012 [2024-11-18 20:30:33.779079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.012 [2024-11-18 20:30:33.779094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.012 [2024-11-18 20:30:33.779110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.779124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.779140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.779154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.779169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.779183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.779199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.779213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.779231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.779245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.779261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.779275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.779291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.779305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.779321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.779335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.779351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.779365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.779381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 
[2024-11-18 20:30:33.779395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.779412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.779426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.779446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.779461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.779478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.779492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.779508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.779522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.779538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.779553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.779569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.779582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.779599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.779613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.779630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.779681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.779700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.779715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.781401] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:22.013 [2024-11-18 20:30:33.781494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:22.013 [2024-11-18 20:30:33.781527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:22.013 [2024-11-18 20:30:33.781589] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 
00:29:22.013 [2024-11-18 20:30:33.781622] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:29:22.013 [2024-11-18 20:30:33.781656] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:29:22.013 [2024-11-18 20:30:33.781681] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:29:22.013 [2024-11-18 20:30:33.781703] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:29:22.013 [2024-11-18 20:30:33.781722] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:29:22.013 [2024-11-18 20:30:33.781743] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:29:22.013 [2024-11-18 20:30:33.781771] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:29:22.013 [2024-11-18 20:30:33.783752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.783778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.783802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.783818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.783835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.783849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.783866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.783880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.783896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.783911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.783927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.783942] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.783958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.783972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.783988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.784002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.784018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.784032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.784049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.784063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.784079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.784093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.784109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.784128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.784146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.784161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.784177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.784191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.784207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-11-18 20:30:33.784221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.013 [2024-11-18 20:30:33.784237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-11-18 20:30:33.784251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.014 [2024-11-18 20:30:33.784267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-11-18 20:30:33.784281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:22.014 [2024-11-18 20:30:33.784297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-11-18 20:30:33.784313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.014 [2024-11-18 20:30:33.784329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-11-18 20:30:33.784343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.014 [2024-11-18 20:30:33.784359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-11-18 20:30:33.784373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.014 [2024-11-18 20:30:33.784390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-11-18 20:30:33.784404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.014 [2024-11-18 20:30:33.784421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-11-18 20:30:33.784435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.014 [2024-11-18 20:30:33.784451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-11-18 20:30:33.784466] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.014 [2024-11-18 20:30:33.784482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-11-18 20:30:33.784497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.014 [2024-11-18 20:30:33.784513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-11-18 20:30:33.784531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.014 [2024-11-18 20:30:33.784548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-11-18 20:30:33.784562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.014 [2024-11-18 20:30:33.784579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-11-18 20:30:33.784592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.014 [2024-11-18 20:30:33.784609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-11-18 20:30:33.784623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.014 [2024-11-18 20:30:33.784646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-11-18 20:30:33.784662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.014 [2024-11-18 20:30:33.784678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-11-18 20:30:33.784692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.014 [2024-11-18 20:30:33.784708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-11-18 20:30:33.784722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.014 [2024-11-18 20:30:33.784738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-11-18 20:30:33.784752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.014 [2024-11-18 20:30:33.784768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-11-18 20:30:33.784782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.014 [2024-11-18 20:30:33.784798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-11-18 20:30:33.784812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:22.014 [2024-11-18 20:30:33.784828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-11-18 20:30:33.784842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.014 [2024-11-18 20:30:33.784858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-11-18 20:30:33.784871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.014 [2024-11-18 20:30:33.784887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-11-18 20:30:33.784901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.014 [2024-11-18 20:30:33.784922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-11-18 20:30:33.784936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.014 [2024-11-18 20:30:33.784953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-11-18 20:30:33.784968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.014 [2024-11-18 20:30:33.784984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-11-18 
20:30:33.784998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated command/completion *NOTICE* pairs elided: READ sqid:1 cid:35-58 nsid:1 (lba 29056-32000, len:128), then READ cid:4-63 (lba 16896-24448) and WRITE cid:0-3 (lba 24576-24960), each reported by nvme_io_qpair_print_command and aborted with SQ DELETION (00/08) qid:1 ...]
00:29:22.016 [2024-11-18 20:30:33.800964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a4820 is same with the state(6) to be set
[... the same READ sqid:1 cid:4+ command/abort sequence repeats starting at 20:30:33.803856, reaching cid:22 (lba 19200) by 20:30:33.804458 ...] 00:29:22.017 [2024-11-18
20:30:33.804633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.017 [2024-11-18 20:30:33.804660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.017 [2024-11-18 20:30:33.804678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.017 [2024-11-18 20:30:33.804692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.017 [2024-11-18 20:30:33.804708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.017 [2024-11-18 20:30:33.804722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.017 [2024-11-18 20:30:33.804738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.017 [2024-11-18 20:30:33.804753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.017 [2024-11-18 20:30:33.804768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.017 [2024-11-18 20:30:33.804782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.017 [2024-11-18 20:30:33.804798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.017 [2024-11-18 20:30:33.804812] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.017 [2024-11-18 20:30:33.804829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.017 [2024-11-18 20:30:33.804842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.017 [2024-11-18 20:30:33.804859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.017 [2024-11-18 20:30:33.804873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.017 [2024-11-18 20:30:33.804889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.017 [2024-11-18 20:30:33.804903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.017 [2024-11-18 20:30:33.804919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.017 [2024-11-18 20:30:33.804937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.017 [2024-11-18 20:30:33.804954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.017 [2024-11-18 20:30:33.804968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.017 [2024-11-18 20:30:33.804985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 
nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.017 [2024-11-18 20:30:33.804999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.017 [2024-11-18 20:30:33.805015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.017 [2024-11-18 20:30:33.805029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.017 [2024-11-18 20:30:33.805045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.017 [2024-11-18 20:30:33.805059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.017 [2024-11-18 20:30:33.805076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.017 [2024-11-18 20:30:33.805090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.017 [2024-11-18 20:30:33.805105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.018 [2024-11-18 20:30:33.805120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.018 [2024-11-18 20:30:33.805136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.018 [2024-11-18 20:30:33.805151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:22.018 [2024-11-18 20:30:33.805167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.018 [2024-11-18 20:30:33.805180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.018 [2024-11-18 20:30:33.805197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.018 [2024-11-18 20:30:33.805211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.018 [2024-11-18 20:30:33.805228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.018 [2024-11-18 20:30:33.805242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.018 [2024-11-18 20:30:33.805258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.018 [2024-11-18 20:30:33.805273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.018 [2024-11-18 20:30:33.805290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.018 [2024-11-18 20:30:33.805304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.018 [2024-11-18 20:30:33.805324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.018 [2024-11-18 20:30:33.805339] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.018 [2024-11-18 20:30:33.805355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.018 [2024-11-18 20:30:33.805369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.018 [2024-11-18 20:30:33.805385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.018 [2024-11-18 20:30:33.805400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.018 [2024-11-18 20:30:33.805416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.018 [2024-11-18 20:30:33.805430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.018 [2024-11-18 20:30:33.805446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.018 [2024-11-18 20:30:33.805461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.018 [2024-11-18 20:30:33.805478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.018 [2024-11-18 20:30:33.805491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.018 [2024-11-18 20:30:33.805507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.018 [2024-11-18 20:30:33.805521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.018 [2024-11-18 20:30:33.805538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.018 [2024-11-18 20:30:33.805552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.018 [2024-11-18 20:30:33.805568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.018 [2024-11-18 20:30:33.805582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.018 [2024-11-18 20:30:33.805598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.018 [2024-11-18 20:30:33.805612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.018 [2024-11-18 20:30:33.805628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.018 [2024-11-18 20:30:33.805650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.018 [2024-11-18 20:30:33.805668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.018 [2024-11-18 20:30:33.805682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:22.018 [2024-11-18 20:30:33.805698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.018 [2024-11-18 20:30:33.805717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.018 [2024-11-18 20:30:33.805733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.018 [2024-11-18 20:30:33.805748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.018 [2024-11-18 20:30:33.805764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.018 [2024-11-18 20:30:33.805779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.018 [2024-11-18 20:30:33.805795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.018 [2024-11-18 20:30:33.805810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.018 [2024-11-18 20:30:33.805826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.018 [2024-11-18 20:30:33.805840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.018 [2024-11-18 20:30:33.805856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.018 [2024-11-18 
20:30:33.805870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.018 [2024-11-18 20:30:33.805885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5d40 is same with the state(6) to be set 00:29:22.018 [2024-11-18 20:30:33.809682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:29:22.018 [2024-11-18 20:30:33.809732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:22.018 [2024-11-18 20:30:33.809757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:22.018 [2024-11-18 20:30:33.809776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:22.018 [2024-11-18 20:30:33.809795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:29:22.018 [2024-11-18 20:30:33.810057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.018 [2024-11-18 20:30:33.810089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf2450 with addr=10.0.0.2, port=4420 00:29:22.018 [2024-11-18 20:30:33.810108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf2450 is same with the state(6) to be set 00:29:22.018 [2024-11-18 20:30:33.810198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.018 [2024-11-18 20:30:33.810223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdec280 with addr=10.0.0.2, port=4420 00:29:22.018 [2024-11-18 20:30:33.810240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdec280 is same with the state(6) to be set 00:29:22.018 [2024-11-18 20:30:33.810296] 
bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:29:22.018 [2024-11-18 20:30:33.810322] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:29:22.018 [2024-11-18 20:30:33.810346] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:29:22.018 [2024-11-18 20:30:33.810376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdec280 (9): Bad file descriptor
00:29:22.018 [2024-11-18 20:30:33.810410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf2450 (9): Bad file descriptor
00:29:22.018 [2024-11-18 20:30:33.811368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:29:22.018 [2024-11-18 20:30:33.811398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:29:22.018 task offset: 24576 on job bdev=Nvme8n1 fails
00:29:22.018 1689.32 IOPS, 105.58 MiB/s
00:29:22.018 Latency(us)
00:29:22.018 [2024-11-18T19:30:34.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:22.018 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:22.018 Job: Nvme1n1 ended in about 0.98 seconds with error
00:29:22.018 Verification LBA range: start 0x0 length 0x400
00:29:22.018 Nvme1n1 : 0.98 194.93 12.18 64.98 0.00 243647.15 28932.93 242337.56
00:29:22.018 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:22.018 Job: Nvme2n1 ended in about 1.01 seconds with error
00:29:22.018 Verification LBA range: start 0x0 length 0x400
00:29:22.018 Nvme2n1 : 1.01 189.20 11.83 63.07 0.00 246566.87 19223.89 240784.12
00:29:22.018 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:22.018 Job: Nvme3n1 ended in about 0.99 seconds with error
00:29:22.018 Verification LBA range: start 0x0 length 0x400
00:29:22.018 Nvme3n1 : 0.99 194.68 12.17 64.89 0.00 234840.94 17573.36 262532.36
00:29:22.019 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:22.019 Job: Nvme4n1 ended in about 0.99 seconds with error
00:29:22.019 Verification LBA range: start 0x0 length 0x400
00:29:22.019 Nvme4n1 : 0.99 197.78 12.36 64.58 0.00 227920.39 18544.26 256318.58
00:29:22.019 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:22.019 Job: Nvme5n1 ended in about 0.99 seconds with error
00:29:22.019 Verification LBA range: start 0x0 length 0x400
00:29:22.019 Nvme5n1 : 0.99 194.45 12.15 64.82 0.00 225971.20 20097.71 264085.81
00:29:22.019 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:22.019 Job: Nvme6n1 ended in about 0.99 seconds with error
00:29:22.019 Verification LBA range: start 0x0 length 0x400
00:29:22.019 Nvme6n1 : 0.99 128.74 8.05 64.37 0.00 297679.08 28544.57 264085.81
00:29:22.019 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:22.019 Job: Nvme7n1 ended in about 1.01 seconds with error
00:29:22.019 Verification LBA range: start 0x0 length 0x400
00:29:22.019 Nvme7n1 : 1.01 127.08 7.94 63.54 0.00 296073.80 19126.80 274959.93
00:29:22.019 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:22.019 Job: Nvme8n1 ended in about 0.98 seconds with error
00:29:22.019 Verification LBA range: start 0x0 length 0x400
00:29:22.019 Nvme8n1 : 0.98 195.66 12.23 65.22 0.00 211123.58 21942.42 243891.01
00:29:22.019 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:22.019 Job: Nvme9n1 ended in about 1.03 seconds with error
00:29:22.019 Verification LBA range: start 0x0 length 0x400
00:29:22.019 Nvme9n1 : 1.03 128.16 8.01 62.14 0.00 285962.84 19709.35 270299.59
00:29:22.019 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:22.019 Job: Nvme10n1 ended in about 1.03 seconds with error
00:29:22.019 Verification LBA range: start 0x0 length 0x400
00:29:22.019 Nvme10n1 : 1.03 127.55 7.97 61.84 0.00 281694.33 19709.35 288940.94
00:29:22.019 [2024-11-18T19:30:34.027Z] ===================================================================================================================
00:29:22.019 [2024-11-18T19:30:34.027Z] Total : 1678.23 104.89 639.44 0.00 251308.72 17573.36 288940.94
00:29:22.019 [2024-11-18 20:30:33.841566] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:22.019 [2024-11-18 20:30:33.841668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:29:22.019 [2024-11-18 20:30:33.841970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.019 [2024-11-18 20:30:33.842027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3af50 with addr=10.0.0.2, port=4420
00:29:22.019 [2024-11-18 20:30:33.842051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3af50 is same with the state(6) to be set
00:29:22.019 [2024-11-18 20:30:33.842162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.019 [2024-11-18 20:30:33.842189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf0700 with addr=10.0.0.2, port=4420
00:29:22.019 [2024-11-18 20:30:33.842207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf0700 is same with the state(6) to be set
00:29:22.019 [2024-11-18 20:30:33.842293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.019 [2024-11-18 20:30:33.842318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x122e800 with addr=10.0.0.2, port=4420
[2024-11-18 20:30:33.842335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122e800 is same with the state(6) to be set
00:29:22.019 [2024-11-18 20:30:33.842424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.019 [2024-11-18 20:30:33.842449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x122de90 with addr=10.0.0.2, port=4420
00:29:22.019 [2024-11-18 20:30:33.842465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122de90 is same with the state(6) to be set
00:29:22.019 [2024-11-18 20:30:33.842545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.019 [2024-11-18 20:30:33.842571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1227120 with addr=10.0.0.2, port=4420
00:29:22.019 [2024-11-18 20:30:33.842588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1227120 is same with the state(6) to be set
00:29:22.019 [2024-11-18 20:30:33.843692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.019 [2024-11-18 20:30:33.843724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdfa0b0 with addr=10.0.0.2, port=4420
00:29:22.019 [2024-11-18 20:30:33.843741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdfa0b0 is same with the state(6) to be set
00:29:22.019 [2024-11-18 20:30:33.843831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.019 [2024-11-18 20:30:33.843862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1255b40 with addr=10.0.0.2, port=4420
00:29:22.019 [2024-11-18 20:30:33.843879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1255b40 is same with the state(6) to be set
00:29:22.019 [2024-11-18 20:30:33.843974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.019 [2024-11-18 20:30:33.843999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124db90 with addr=10.0.0.2, port=4420
00:29:22.019 [2024-11-18 20:30:33.844015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x124db90 is same with the state(6) to be set
00:29:22.019 [2024-11-18 20:30:33.844043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd3af50 (9): Bad file descriptor
00:29:22.019 [2024-11-18 20:30:33.844069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf0700 (9): Bad file descriptor
00:29:22.019 [2024-11-18 20:30:33.844088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x122e800 (9): Bad file descriptor
00:29:22.019 [2024-11-18 20:30:33.844107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x122de90 (9): Bad file descriptor
00:29:22.019 [2024-11-18 20:30:33.844126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1227120 (9): Bad file descriptor
00:29:22.019 [2024-11-18 20:30:33.844143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:29:22.019 [2024-11-18 20:30:33.844171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:29:22.019 [2024-11-18 20:30:33.844191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:29:22.019 [2024-11-18 20:30:33.844211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:29:22.019 [2024-11-18 20:30:33.844230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:29:22.019 [2024-11-18 20:30:33.844243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:29:22.019 [2024-11-18 20:30:33.844256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:29:22.019 [2024-11-18 20:30:33.844269] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:29:22.019 [2024-11-18 20:30:33.844343] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:29:22.019 [2024-11-18 20:30:33.844373] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:29:22.019 [2024-11-18 20:30:33.844393] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:29:22.019 [2024-11-18 20:30:33.844414] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:29:22.019 [2024-11-18 20:30:33.844435] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:29:22.019 [2024-11-18 20:30:33.845120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdfa0b0 (9): Bad file descriptor
00:29:22.019 [2024-11-18 20:30:33.845151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1255b40 (9): Bad file descriptor
00:29:22.019 [2024-11-18 20:30:33.845171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x124db90 (9): Bad file descriptor
00:29:22.019 [2024-11-18 20:30:33.845187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:29:22.019 [2024-11-18 20:30:33.845201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:29:22.019 [2024-11-18 20:30:33.845215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:29:22.019 [2024-11-18 20:30:33.845229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:29:22.019 [2024-11-18 20:30:33.845243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:29:22.019 [2024-11-18 20:30:33.845257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:29:22.019 [2024-11-18 20:30:33.845270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:29:22.019 [2024-11-18 20:30:33.845282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:29:22.019 [2024-11-18 20:30:33.845297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:29:22.019 [2024-11-18 20:30:33.845310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:29:22.019 [2024-11-18 20:30:33.845323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:29:22.020 [2024-11-18 20:30:33.845336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:29:22.020 [2024-11-18 20:30:33.845349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:29:22.020 [2024-11-18 20:30:33.845368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:29:22.020 [2024-11-18 20:30:33.845381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:29:22.020 [2024-11-18 20:30:33.845394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:29:22.020 [2024-11-18 20:30:33.845408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:29:22.020 [2024-11-18 20:30:33.845422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:29:22.020 [2024-11-18 20:30:33.845435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:29:22.020 [2024-11-18 20:30:33.845447] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:29:22.020 [2024-11-18 20:30:33.845520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:29:22.020 [2024-11-18 20:30:33.845545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:29:22.020 [2024-11-18 20:30:33.845579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:29:22.020 [2024-11-18 20:30:33.845596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:29:22.020 [2024-11-18 20:30:33.845611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:29:22.020 [2024-11-18 20:30:33.845624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:29:22.020 [2024-11-18 20:30:33.845669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:29:22.020 [2024-11-18 20:30:33.845685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:29:22.020 [2024-11-18 20:30:33.845699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:29:22.020 [2024-11-18 20:30:33.845713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:29:22.020 [2024-11-18 20:30:33.845727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:29:22.020 [2024-11-18 20:30:33.845741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:29:22.020 [2024-11-18 20:30:33.845754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:29:22.020 [2024-11-18 20:30:33.845766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:29:22.020 [2024-11-18 20:30:33.845882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.020 [2024-11-18 20:30:33.845910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdec280 with addr=10.0.0.2, port=4420 00:29:22.020 [2024-11-18 20:30:33.845928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdec280 is same with the state(6) to be set 00:29:22.020 [2024-11-18 20:30:33.846008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.020 [2024-11-18 20:30:33.846034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf2450 with addr=10.0.0.2, port=4420 00:29:22.020 [2024-11-18 20:30:33.846050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf2450 is same with the state(6) to be set 00:29:22.020 [2024-11-18 20:30:33.846095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdec280 (9): Bad file descriptor 00:29:22.020 [2024-11-18 20:30:33.846120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf2450 (9): Bad file descriptor 00:29:22.020 [2024-11-18 20:30:33.846165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:22.020 [2024-11-18 20:30:33.846183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:22.020 [2024-11-18 20:30:33.846198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:22.020 [2024-11-18 20:30:33.846211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:29:22.020 [2024-11-18 20:30:33.846226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:22.020 [2024-11-18 20:30:33.846239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:22.020 [2024-11-18 20:30:33.846252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:22.020 [2024-11-18 20:30:33.846264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:29:22.278 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:29:23.213 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 327487 00:29:23.213 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:29:23.213 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 327487 00:29:23.213 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:23.213 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:23.213 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:29:23.213 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:23.213 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 327487 00:29:23.213 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:29:23.213 20:30:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:23.213 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:29:23.213 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:29:23.213 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:29:23.213 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:23.213 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:29:23.213 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:23.213 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:23.213 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:23.213 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:23.213 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:23.213 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:29:23.213 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:23.213 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:29:23.213 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:29:23.213 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:23.213 rmmod nvme_tcp 00:29:23.471 rmmod nvme_fabrics 00:29:23.471 rmmod nvme_keyring 00:29:23.471 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:23.471 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:29:23.471 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:29:23.471 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 327336 ']' 00:29:23.471 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 327336 00:29:23.471 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 327336 ']' 00:29:23.471 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 327336 00:29:23.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (327336) - No such process 00:29:23.471 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 327336 is not found' 00:29:23.471 Process with pid 327336 is not found 00:29:23.471 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:23.471 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:23.471 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:23.471 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:29:23.471 20:30:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:29:23.472 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:23.472 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:29:23.472 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:23.472 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:23.472 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.472 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:23.472 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:25.376 00:29:25.376 real 0m7.607s 00:29:25.376 user 0m19.192s 00:29:25.376 sys 0m1.473s 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:25.376 ************************************ 00:29:25.376 END TEST nvmf_shutdown_tc3 00:29:25.376 ************************************ 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown 
-- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:25.376 ************************************ 00:29:25.376 START TEST nvmf_shutdown_tc4 00:29:25.376 ************************************ 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:29:25.376 20:30:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:25.376 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:25.636 20:30:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:25.636 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:25.636 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:25.636 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.637 20:30:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:25.637 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:25.637 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:25.637 20:30:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:25.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:25.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:29:25.637 00:29:25.637 --- 10.0.0.2 ping statistics --- 00:29:25.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.637 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:25.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:25.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:29:25.637 00:29:25.637 --- 10.0.0.1 ping statistics --- 00:29:25.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.637 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:25.637 20:30:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=328395 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 328395 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 328395 ']' 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:25.637 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:25.896 [2024-11-18 20:30:37.645062] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:29:25.896 [2024-11-18 20:30:37.645143] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:25.896 [2024-11-18 20:30:37.719793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:25.896 [2024-11-18 20:30:37.768735] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:25.896 [2024-11-18 20:30:37.768788] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:25.896 [2024-11-18 20:30:37.768801] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:25.896 [2024-11-18 20:30:37.768813] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:25.896 [2024-11-18 20:30:37.768823] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:25.896 [2024-11-18 20:30:37.770247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:25.896 [2024-11-18 20:30:37.770313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:25.896 [2024-11-18 20:30:37.770379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:25.896 [2024-11-18 20:30:37.770381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.896 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:25.896 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:29:25.896 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:25.896 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:25.896 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:25.896 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:25.896 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:25.896 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.154 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:26.154 [2024-11-18 20:30:37.905543] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:26.154 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.154 20:30:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:26.154 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:26.154 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:26.154 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:26.154 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:26.154 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:26.154 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:26.154 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:26.154 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:26.154 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:26.154 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:26.154 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:26.154 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:26.154 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:26.154 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:29:26.154 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:26.154 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:26.154 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:26.154 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:26.154 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:26.154 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:26.154 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:26.154 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:26.154 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:26.154 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:26.154 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:26.154 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.154 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:26.154 Malloc1 00:29:26.154 [2024-11-18 20:30:37.989478] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:26.154 Malloc2 00:29:26.154 Malloc3 00:29:26.154 Malloc4 00:29:26.154 Malloc5 00:29:26.412 Malloc6 00:29:26.412 Malloc7 00:29:26.412 Malloc8 00:29:26.412 Malloc9 
00:29:26.412 Malloc10 00:29:26.412 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.412 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:26.412 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:26.412 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:26.671 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=328575 00:29:26.671 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:29:26.671 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:29:26.671 [2024-11-18 20:30:38.491915] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:29:31.946 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:31.946 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 328395 00:29:31.946 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 328395 ']' 00:29:31.946 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 328395 00:29:31.946 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:29:31.946 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:31.946 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 328395 00:29:31.946 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:31.946 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:31.946 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 328395' 00:29:31.946 killing process with pid 328395 00:29:31.946 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 328395 00:29:31.946 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 328395 00:29:31.946 [2024-11-18 20:30:43.483070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f60f80 is same with the state(6) to be set 00:29:31.946 [2024-11-18 20:30:43.483165] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f60f80 is same with the state(6) to be set 00:29:31.946 [2024-11-18 20:30:43.483182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f60f80 is same with the state(6) to be set 00:29:31.946 [2024-11-18 20:30:43.483196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f60f80 is same with the state(6) to be set 00:29:31.946 [2024-11-18 20:30:43.483209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f60f80 is same with the state(6) to be set 00:29:31.946 [2024-11-18 20:30:43.483222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f60f80 is same with the state(6) to be set 00:29:31.946 [2024-11-18 20:30:43.483235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f60f80 is same with the state(6) to be set 00:29:31.946 [2024-11-18 20:30:43.483248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f60f80 is same with the state(6) to be set 00:29:31.946 [2024-11-18 20:30:43.483260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f60f80 is same with the state(6) to be set 00:29:31.946 [2024-11-18 20:30:43.484331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f61450 is same with the state(6) to be set 00:29:31.946 [2024-11-18 20:30:43.484408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f61450 is same with the state(6) to be set 00:29:31.946 [2024-11-18 20:30:43.484447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f61450 is same with the state(6) to be set 00:29:31.946 [2024-11-18 20:30:43.484463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f61450 is same with the state(6) to be set 00:29:31.946 [2024-11-18 20:30:43.484480] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f61450 is same with the state(6) to be set 00:29:31.946 [2024-11-18 20:30:43.484498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f61450 is same with the state(6) to be set 00:29:31.946 [2024-11-18 20:30:43.484513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f61450 is same with the state(6) to be set 00:29:31.946 [2024-11-18 20:30:43.484526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f61450 is same with the state(6) to be set 00:29:31.946 [2024-11-18 20:30:43.484539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f61450 is same with the state(6) to be set 00:29:31.946 [2024-11-18 20:30:43.484552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f61450 is same with the state(6) to be set 00:29:31.946 [2024-11-18 20:30:43.485645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f61920 is same with the state(6) to be set 00:29:31.946 [2024-11-18 20:30:43.485688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f61920 is same with the state(6) to be set 00:29:31.946 [2024-11-18 20:30:43.485712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f61920 is same with the state(6) to be set 00:29:31.946 [2024-11-18 20:30:43.485728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f61920 is same with the state(6) to be set 00:29:31.946 [2024-11-18 20:30:43.485742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f61920 is same with the state(6) to be set 00:29:31.946 [2024-11-18 20:30:43.485756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f61920 is same with the state(6) to be set 00:29:31.946 [2024-11-18 20:30:43.485768] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f61920 is same with the state(6) to be set 00:29:31.946 [2024-11-18 20:30:43.485781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f61920 is same with the state(6) to be set 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 starting I/O failed: -6 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 starting I/O failed: -6 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 starting I/O failed: -6 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 starting I/O failed: -6 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 starting I/O failed: -6 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 starting I/O failed: -6 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 starting I/O failed: -6 00:29:31.946 Write completed with error (sct=0, 
sc=8) 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 starting I/O failed: -6 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 [2024-11-18 20:30:43.490943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d58010 is same with Write completed with error (sct=0, sc=8) 00:29:31.946 the state(6) to be set 00:29:31.946 starting I/O failed: -6 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 [2024-11-18 20:30:43.491005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d58010 is same with the state(6) to be set 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 [2024-11-18 20:30:43.491031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d58010 is same with the state(6) to be set 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 starting I/O failed: -6 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 [2024-11-18 20:30:43.491147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 starting I/O failed: -6 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 starting I/O failed: -6 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.946 starting I/O failed: -6 00:29:31.946 Write completed with error (sct=0, 
sc=8) 00:29:31.946 starting I/O failed: -6 00:29:31.946 Write completed with error (sct=0, sc=8) 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 [2024-11-18 20:30:43.491719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f37060 is same with starting I/O failed: -6 00:29:31.947 the state(6) to be set 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 [2024-11-18 20:30:43.491751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f37060 is same with the state(6) to be set 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 [2024-11-18 20:30:43.491768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f37060 is same with the state(6) to be set 00:29:31.947 [2024-11-18 20:30:43.491782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f37060 is same with the state(6) to be set 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 [2024-11-18 20:30:43.491794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f37060 is same with the state(6) to be set 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 [2024-11-18 20:30:43.491808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1f37060 is same with starting I/O failed: -6 00:29:31.947 the state(6) to be set 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 [2024-11-18 20:30:43.492218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f37530 is same with starting I/O failed: -6 00:29:31.947 the state(6) to be set 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 [2024-11-18 20:30:43.492260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f37530 is same with 
the state(6) to be set 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 [2024-11-18 20:30:43.492277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f37530 is same with the state(6) to be set 00:29:31.947 [2024-11-18 20:30:43.492291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f37530 is same with the state(6) to be set 00:29:31.947 [2024-11-18 20:30:43.492304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f37530 is same with the state(6) to be set 00:29:31.947 [2024-11-18 20:30:43.492307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.947 [2024-11-18 20:30:43.492317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f37530 is same with the state(6) to be set 00:29:31.947 starting I/O failed: -6 00:29:31.947 starting I/O failed: -6 00:29:31.947 starting I/O failed: -6 00:29:31.947 starting I/O failed: -6 00:29:31.947 [2024-11-18 20:30:43.492709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f37a00 is same with the state(6) to be set 00:29:31.947 starting I/O failed: -6 00:29:31.947 [2024-11-18 20:30:43.492743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f37a00 is same with the state(6) to be set 00:29:31.947 [2024-11-18 20:30:43.492763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f37a00 is same with the state(6) to be set 00:29:31.947 [2024-11-18 20:30:43.492777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f37a00 is same with the state(6) to be set 00:29:31.947 starting I/O failed: -6 00:29:31.947 [2024-11-18 20:30:43.492791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f37a00 is same with the state(6) to be set 00:29:31.947 
[2024-11-18 20:30:43.492807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f37a00 is same with the state(6) to be set 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 [2024-11-18 20:30:43.493252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36b90 is same with the state(6) to be set 00:29:31.947 Write completed with error (sct=0, 
sc=8) 00:29:31.947 [2024-11-18 20:30:43.493280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36b90 is same with the state(6) to be set 00:29:31.947 starting I/O failed: -6 00:29:31.947 [2024-11-18 20:30:43.493301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36b90 is same with Write completed with error (sct=0, sc=8) 00:29:31.947 the state(6) to be set 00:29:31.947 starting I/O failed: -6 00:29:31.947 [2024-11-18 20:30:43.493316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36b90 is same with the state(6) to be set 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 starting I/O failed: -6 00:29:31.947 [2024-11-18 20:30:43.493335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36b90 is same with the state(6) to be set 00:29:31.947 Write completed with error (sct=0, sc=8) 00:29:31.947 [2024-11-18 20:30:43.493348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36b90 is same with the state(6) to be set 00:29:31.948 [2024-11-18 20:30:43.493360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36b90 is same with Write completed with error (sct=0, sc=8) 00:29:31.948 the state(6) to be set 00:29:31.948 starting I/O failed: -6 00:29:31.948 [2024-11-18 20:30:43.493374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36b90 is same with the state(6) to be set 00:29:31.948 Write completed with error (sct=0, sc=8) 00:29:31.948 starting I/O failed: -6 00:29:31.948 Write completed with error (sct=0, sc=8) 00:29:31.948 starting I/O failed: -6 00:29:31.948 Write completed with error (sct=0, sc=8) 00:29:31.948 Write completed with error (sct=0, sc=8) 00:29:31.948 starting I/O failed: -6 00:29:31.948 Write completed with error (sct=0, sc=8) 00:29:31.948 starting I/O failed: -6 00:29:31.948 Write completed with error (sct=0, sc=8) 
00:29:31.948 Write completed with error (sct=0, sc=8)
00:29:31.948 starting I/O failed: -6
00:29:31.948 last two messages repeated many times, interleaved with the target-side messages below, through 00:29:31.952
00:29:31.948 [2024-11-18 20:30:43.493742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:31.948 [2024-11-18 20:30:43.494544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57160 is same with the state(6) to be set
00:29:31.948 last message repeated 5 more times, through 20:30:43.494646
00:29:31.948 [2024-11-18 20:30:43.495179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57630 is same with the state(6) to be set
00:29:31.949 last message repeated 10 more times, through 20:30:43.495338
00:29:31.949 [2024-11-18 20:30:43.495272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:31.949 NVMe io qpair process completion error
00:29:31.949 [2024-11-18 20:30:43.495907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d57b20 is same with the state(6) to be set
00:29:31.949 last message repeated 4 more times, through 20:30:43.495992
00:29:31.949 [2024-11-18 20:30:43.496588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d56c90 is same with the state(6) to be set
00:29:31.949 last message repeated 9 more times, through 20:30:43.496748
00:29:31.949 [2024-11-18 20:30:43.500356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d59880 is same with the state(6) to be set
00:29:31.949 last message repeated 5 more times, through 20:30:43.500449
00:29:31.949 [2024-11-18 20:30:43.501064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d59d50 is same with the state(6) to be set
00:29:31.949 last message repeated 5 more times, through 20:30:43.501171
00:29:31.949 [2024-11-18 20:30:43.501852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a220 is same with the state(6) to be set
00:29:31.949 last message repeated 4 more times, through 20:30:43.501940
00:29:31.949 [2024-11-18 20:30:43.502506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.949 [2024-11-18 20:30:43.502691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d593b0 is same with the state(6) to be set
00:29:31.950 last message repeated 8 more times, through 20:30:43.502831
00:29:31.950 [2024-11-18 20:30:43.503611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.950 [2024-11-18 20:30:43.504752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:31.951 [2024-11-18 20:30:43.505261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5d7e0 is same with the state(6) to be set
00:29:31.951 last message repeated 9 more times, through 20:30:43.505410
00:29:31.951 [2024-11-18 20:30:43.505926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5dcd0 is same with the state(6) to be set
00:29:31.951 last message repeated 9 more times, through 20:30:43.506066
00:29:31.952 [2024-11-18 20:30:43.506330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:31.952 NVMe io qpair process completion error
00:29:31.952 [2024-11-18 20:30:43.506402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5e1c0 is same with the state(6) to be set
00:29:31.952 last message repeated 8 more times, through 20:30:43.506523
00:29:31.952 [2024-11-18 20:30:43.507665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f60110 is same with the state(6) to be set
00:29:31.952 last message repeated 2 more times, through 20:30:43.507717
00:29:31.952 [2024-11-18 20:30:43.508318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f605e0 is same with the state(6) to be set
00:29:31.952 last message repeated 5 more times, through 20:30:43.508400
00:29:31.952 Write completed with error (sct=0, sc=8)
00:29:31.952 starting I/O failed: -6
00:29:31.952 last two messages repeated, interleaved
starting I/O failed: -6 00:29:31.952 Write completed with error (sct=0, sc=8) 00:29:31.952 Write completed with error (sct=0, sc=8) 00:29:31.952 Write completed with error (sct=0, sc=8) 00:29:31.952 starting I/O failed: -6 00:29:31.952 Write completed with error (sct=0, sc=8) 00:29:31.952 starting I/O failed: -6 00:29:31.952 Write completed with error (sct=0, sc=8) 00:29:31.952 Write completed with error (sct=0, sc=8) 00:29:31.952 Write completed with error (sct=0, sc=8) 00:29:31.952 starting I/O failed: -6 00:29:31.952 Write completed with error (sct=0, sc=8) 00:29:31.952 starting I/O failed: -6 00:29:31.952 Write completed with error (sct=0, sc=8) 00:29:31.952 starting I/O failed: -6 00:29:31.952 Write completed with error (sct=0, sc=8) 00:29:31.952 Write completed with error (sct=0, sc=8) 00:29:31.952 starting I/O failed: -6 00:29:31.952 Write completed with error (sct=0, sc=8) 00:29:31.952 starting I/O failed: -6 00:29:31.952 Write completed with error (sct=0, sc=8) 00:29:31.952 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, 
sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error 
(sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with 
error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed 
with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.953 Write completed with error (sct=0, sc=8) 00:29:31.953 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write 
completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 [2024-11-18 20:30:43.512784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.954 NVMe io qpair process completion error 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O 
failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 [2024-11-18 20:30:43.513978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, 
sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 
Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 [2024-11-18 20:30:43.515061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.954 Write completed with error (sct=0, sc=8) 00:29:31.954 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 
starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 
Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 [2024-11-18 20:30:43.516234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 
00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, 
sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.955 starting I/O failed: -6 00:29:31.955 Write completed with error (sct=0, sc=8) 00:29:31.956 starting I/O failed: -6 00:29:31.956 Write completed with error (sct=0, sc=8) 00:29:31.956 starting I/O failed: -6 00:29:31.956 Write completed with error (sct=0, sc=8) 00:29:31.956 starting I/O failed: -6 00:29:31.956 Write completed with error (sct=0, sc=8) 00:29:31.956 starting I/O failed: -6 00:29:31.956 Write completed with error (sct=0, sc=8) 00:29:31.956 starting I/O failed: -6 00:29:31.956 Write completed with error (sct=0, sc=8) 00:29:31.956 starting I/O failed: -6 00:29:31.956 Write completed with error (sct=0, sc=8) 00:29:31.956 starting I/O failed: -6 00:29:31.956 Write completed with error (sct=0, sc=8) 00:29:31.956 starting I/O failed: -6 00:29:31.956 Write completed with error 
(sct=0, sc=8) 00:29:31.956 starting I/O failed: -6
00:29:31.956 Write completed with error (sct=0, sc=8)
00:29:31.956 starting I/O failed: -6
00:29:31.956 Write completed with error (sct=0, sc=8)
00:29:31.956 starting I/O failed: -6
00:29:31.956 [2024-11-18 20:30:43.517890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.956 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:29:31.956 [2024-11-18 20:30:43.519324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:29:31.956 [2024-11-18 20:30:43.520299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:29:31.957 [2024-11-18 20:30:43.521421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:29:31.957 [2024-11-18 20:30:43.523568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:31.957 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:29:31.958 [2024-11-18 20:30:43.524965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:29:31.958 [2024-11-18 20:30:43.526068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:29:31.959 [2024-11-18 20:30:43.527155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:29:31.959 [2024-11-18 20:30:43.530540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:31.959 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:29:31.960 [2024-11-18 20:30:43.531885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:29:31.960 [2024-11-18 20:30:43.532996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:29:31.960 Write completed with error (sct=0, sc=8)
00:29:31.960 starting I/O failed: -6 00:29:31.960 Write completed with error (sct=0, sc=8) 00:29:31.960 starting I/O failed: -6 00:29:31.960 Write completed with error (sct=0, sc=8) 00:29:31.960 Write completed with error (sct=0, sc=8) 00:29:31.960 starting I/O failed: -6 00:29:31.960 Write completed with error (sct=0, sc=8) 00:29:31.960 starting I/O failed: -6 00:29:31.960 Write completed with error (sct=0, sc=8) 00:29:31.960 starting I/O failed: -6 00:29:31.960 Write completed with error (sct=0, sc=8) 00:29:31.960 Write completed with error (sct=0, sc=8) 00:29:31.960 starting I/O failed: -6 00:29:31.960 Write completed with error (sct=0, sc=8) 00:29:31.960 starting I/O failed: -6 00:29:31.960 Write completed with error (sct=0, sc=8) 00:29:31.960 starting I/O failed: -6 00:29:31.960 Write completed with error (sct=0, sc=8) 00:29:31.960 Write completed with error (sct=0, sc=8) 00:29:31.960 starting I/O failed: -6 00:29:31.960 Write completed with error (sct=0, sc=8) 00:29:31.960 starting I/O failed: -6 00:29:31.960 Write completed with error (sct=0, sc=8) 00:29:31.960 starting I/O failed: -6 00:29:31.960 Write completed with error (sct=0, sc=8) 00:29:31.960 Write completed with error (sct=0, sc=8) 00:29:31.960 starting I/O failed: -6 00:29:31.960 Write completed with error (sct=0, sc=8) 00:29:31.960 starting I/O failed: -6 00:29:31.960 Write completed with error (sct=0, sc=8) 00:29:31.960 starting I/O failed: -6 00:29:31.960 Write completed with error (sct=0, sc=8) 00:29:31.960 Write completed with error (sct=0, sc=8) 00:29:31.960 starting I/O failed: -6 00:29:31.960 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 
00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 [2024-11-18 20:30:43.534122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 
Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 
00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 [2024-11-18 20:30:43.538573] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.961 NVMe io qpair process completion error 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 Write completed with error (sct=0, sc=8) 00:29:31.961 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed 
with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 [2024-11-18 20:30:43.540024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, 
sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 [2024-11-18 20:30:43.541000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.962 Write completed with error (sct=0, 
sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O 
failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.962 starting I/O failed: -6 00:29:31.962 Write completed with error (sct=0, sc=8) 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 [2024-11-18 
20:30:43.542176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 
00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, 
sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 [2024-11-18 20:30:43.544335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.963 NVMe io 
qpair process completion error 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 Write completed with 
error (sct=0, sc=8) 00:29:31.963 starting I/O failed: -6 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.963 Write completed with error (sct=0, sc=8) 00:29:31.964 Write completed with error (sct=0, sc=8) 00:29:31.964 starting I/O failed: -6 00:29:31.964 Write completed with error (sct=0, sc=8) 00:29:31.964 Write completed with error (sct=0, sc=8) 00:29:31.964 Write completed with error (sct=0, sc=8) 00:29:31.964 Write completed with error (sct=0, sc=8) 00:29:31.964 starting I/O failed: -6 00:29:31.964 [2024-11-18 20:30:43.545664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.964 Write completed with error (sct=0, sc=8) 00:29:31.964 starting I/O failed: -6 00:29:31.964 Write completed with error (sct=0, sc=8) 00:29:31.964 Write completed with error (sct=0, sc=8) 00:29:31.964 Write completed with error (sct=0, sc=8) 00:29:31.964 starting I/O failed: -6 00:29:31.964 Write completed with error (sct=0, sc=8) 00:29:31.964 starting I/O failed: -6 00:29:31.964 Write completed with error (sct=0, sc=8) 00:29:31.964 Write completed with error (sct=0, sc=8) 00:29:31.964 Write completed with error (sct=0, sc=8) 00:29:31.964 starting I/O failed: -6 00:29:31.964 Write completed with error (sct=0, sc=8) 00:29:31.964 starting I/O failed: -6 00:29:31.964 Write completed with error (sct=0, sc=8) 00:29:31.964 Write completed with error (sct=0, sc=8) 00:29:31.964 Write completed with error (sct=0, sc=8) 00:29:31.964 starting I/O failed: -6 00:29:31.964 Write completed with error (sct=0, sc=8) 00:29:31.964 starting I/O failed: -6 00:29:31.964 Write completed with error (sct=0, sc=8) 00:29:31.964 Write completed with error (sct=0, sc=8) 00:29:31.964 Write completed with error (sct=0, sc=8) 00:29:31.964 starting I/O failed: -6 00:29:31.964 Write completed with error (sct=0, sc=8) 00:29:31.964 
starting I/O failed: -6
00:29:31.964 Write completed with error (sct=0, sc=8)
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:29:31.964 [2024-11-18 20:30:43.546747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:29:31.965 [2024-11-18 20:30:43.547877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:29:31.965 [2024-11-18 20:30:43.549957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:31.965 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:29:31.966 [2024-11-18 20:30:43.551196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:29:31.966 [2024-11-18 20:30:43.552275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:29:31.967 [2024-11-18 20:30:43.553395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:29:31.967 [2024-11-18 20:30:43.557319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:31.967 NVMe io qpair process completion error
00:29:31.967 Initializing NVMe Controllers
00:29:31.967 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:29:31.967 Controller IO queue size 128, less than required.
00:29:31.967 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:31.967 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:29:31.967 Controller IO queue size 128, less than required.
00:29:31.967 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:31.967 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:29:31.967 Controller IO queue size 128, less than required.
00:29:31.967 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:31.967 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:29:31.967 Controller IO queue size 128, less than required.
00:29:31.967 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:31.967 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:29:31.967 Controller IO queue size 128, less than required.
00:29:31.967 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:31.967 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:29:31.967 Controller IO queue size 128, less than required.
00:29:31.967 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:31.967 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:31.967 Controller IO queue size 128, less than required.
00:29:31.967 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:31.967 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:29:31.967 Controller IO queue size 128, less than required.
00:29:31.967 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:31.967 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:29:31.967 Controller IO queue size 128, less than required.
00:29:31.967 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:31.968 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:29:31.968 Controller IO queue size 128, less than required.
00:29:31.968 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:31.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:29:31.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:29:31.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:29:31.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:29:31.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:29:31.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:29:31.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:31.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:29:31.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:29:31.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:29:31.968 Initialization complete. Launching workers.
00:29:31.968 ========================================================
00:29:31.968 Latency(us)
00:29:31.968 Device Information : IOPS MiB/s Average min max
00:29:31.968 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1811.48 77.84 70683.30 808.53 134986.38
00:29:31.968 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1810.62 77.80 69919.34 995.39 123532.46
00:29:31.968 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1783.42 76.63 71766.44 1155.86 135229.32
00:29:31.968 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1799.82 77.34 71137.17 969.70 137607.16
00:29:31.968 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1820.33 78.22 69541.13 1147.62 118357.90
00:29:31.968 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1827.89 78.54 69297.07 603.40 117513.46
00:29:31.968 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1795.51 77.15 70565.80 1080.49 120408.21
00:29:31.968 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1844.94 79.27 68700.44 827.98 122681.33
00:29:31.968 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1821.63 78.27 69609.62 942.28 125457.57
00:29:31.968 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1809.11 77.74 70142.07 905.78 129510.82
00:29:31.968 ========================================================
00:29:31.968 Total : 18124.74 778.80 70128.71 603.40 137607.16
00:29:31.968
00:29:31.968 [2024-11-18 20:30:43.563902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0040 is same with the state(6) to be set
00:29:31.968 [2024-11-18 20:30:43.564013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6240 is same with the state(6) to be set
00:29:31.968 [2024-11-18 20:30:43.564071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c9e40 is same with the state(6) to be set
00:29:31.968 [2024-11-18 20:30:43.564127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d3c40 is same with the state(6) to be set
00:29:31.968 [2024-11-18 20:30:43.564185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4f40 is same with the state(6) to be set
00:29:31.968 [2024-11-18 20:30:43.564243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b40 is same with the state(6) to be set
00:29:31.968 [2024-11-18 20:30:43.564299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1330 is same with the state(6) to be set
00:29:31.968 [2024-11-18 20:30:43.564354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dda40 is same with the state(6) to be set
00:29:31.968 [2024-11-18 20:30:43.564410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ced40 is same with the state(6) to be set
00:29:31.968 [2024-11-18 20:30:43.564467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bb140 is same with the state(6) to be set
00:29:31.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:31.968 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:29:33.347 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 328575
00:29:33.347 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:29:33.347 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 328575
00:29:33.348 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:29:33.348 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:33.348 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:29:33.348 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:33.348 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 328575
00:29:33.348 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:29:33.348 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:29:33.348 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:29:33.348 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:29:33.348 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:29:33.348 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:29:33.348 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:29:33.348 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:29:33.348 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:29:33.348 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:33.348 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:29:33.348 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:33.348 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:29:33.348 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:33.348 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:33.348 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:29:33.348 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:29:33.348 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 328395 ']'
00:29:33.348 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 328395
00:29:33.348 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 328395 ']'
00:29:33.348 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 328395
00:29:33.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (328395) - No such process
00:29:33.348 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 328395 is not found'
00:29:33.348 Process with pid 328395 is not found
00:29:33.348 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:33.348 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:33.348 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:33.348 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:29:33.348 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:29:33.348 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:33.348 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:29:33.348 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:33.348 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:33.348 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:33.348 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:33.348 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:35.249 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:35.249
00:29:35.249 real 0m9.688s
00:29:35.249 user 0m23.770s
00:29:35.249 sys 0m5.385s
00:29:35.249 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:35.249 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:35.249 ************************************ 00:29:35.249 END TEST nvmf_shutdown_tc4 00:29:35.249 ************************************ 00:29:35.249 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:29:35.249 00:29:35.249 real 0m36.940s 00:29:35.249 user 1m39.248s 00:29:35.249 sys 0m11.780s 00:29:35.249 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:35.249 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:35.249 ************************************ 00:29:35.249 END TEST nvmf_shutdown 00:29:35.249 ************************************ 00:29:35.249 20:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:35.249 20:30:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:35.249 20:30:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:35.249 20:30:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:35.249 ************************************ 00:29:35.249 START TEST nvmf_nsid 00:29:35.249 ************************************ 00:29:35.249 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:35.249 * Looking for test storage... 
00:29:35.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:35.249 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:35.249 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:29:35.249 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:35.508 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:35.508 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:35.508 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:35.508 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:35.508 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:29:35.508 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:29:35.508 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:29:35.508 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:29:35.508 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:29:35.508 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:29:35.508 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:29:35.508 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:35.508 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:29:35.508 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:29:35.508 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:35.509 
20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:35.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.509 --rc genhtml_branch_coverage=1 00:29:35.509 --rc genhtml_function_coverage=1 00:29:35.509 --rc genhtml_legend=1 00:29:35.509 --rc geninfo_all_blocks=1 00:29:35.509 --rc 
geninfo_unexecuted_blocks=1 00:29:35.509 00:29:35.509 ' 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:35.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.509 --rc genhtml_branch_coverage=1 00:29:35.509 --rc genhtml_function_coverage=1 00:29:35.509 --rc genhtml_legend=1 00:29:35.509 --rc geninfo_all_blocks=1 00:29:35.509 --rc geninfo_unexecuted_blocks=1 00:29:35.509 00:29:35.509 ' 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:35.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.509 --rc genhtml_branch_coverage=1 00:29:35.509 --rc genhtml_function_coverage=1 00:29:35.509 --rc genhtml_legend=1 00:29:35.509 --rc geninfo_all_blocks=1 00:29:35.509 --rc geninfo_unexecuted_blocks=1 00:29:35.509 00:29:35.509 ' 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:35.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.509 --rc genhtml_branch_coverage=1 00:29:35.509 --rc genhtml_function_coverage=1 00:29:35.509 --rc genhtml_legend=1 00:29:35.509 --rc geninfo_all_blocks=1 00:29:35.509 --rc geninfo_unexecuted_blocks=1 00:29:35.509 00:29:35.509 ' 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:35.509 20:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:35.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:29:35.509 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:29:35.510 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:29:35.510 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:29:35.510 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:29:35.510 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:29:35.510 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:35.510 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:35.510 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:35.510 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:35.510 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:35.510 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.510 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:29:35.510 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:35.510 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:35.510 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:35.510 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:29:35.510 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:37.430 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:37.430 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:29:37.430 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:37.430 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:37.430 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:37.430 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:37.430 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:37.430 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:29:37.430 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:37.430 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:29:37.430 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:29:37.430 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:29:37.430 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:29:37.430 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:29:37.430 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:29:37.430 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:37.430 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:37.430 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:37.430 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:37.430 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:37.430 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:37.430 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:37.431 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:37.431 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:37.431 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:37.431 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:37.431 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:37.431 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:37.431 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:37.431 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:29:37.431 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:37.431 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:37.431 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:37.431 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:37.431 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:37.431 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:37.431 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:37.431 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:37.431 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.431 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.431 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:37.431 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:37.431 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:37.431 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:37.431 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:37.431 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:37.431 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.432 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.432 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:29:37.432 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:37.432 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:37.432 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:37.432 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:37.432 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.432 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:37.432 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:37.432 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:37.432 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:37.432 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.432 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:37.432 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:37.432 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.432 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:37.432 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.432 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:37.432 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:37.433 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:29:37.433 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:37.433 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.433 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:37.433 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:37.433 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.433 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:37.433 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:29:37.433 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:37.433 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:37.433 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:37.433 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:37.433 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:37.433 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:37.433 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:37.433 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:37.434 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:37.434 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:37.434 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:37.434 20:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:37.434 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:37.434 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:37.434 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:37.434 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:37.434 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:37.434 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:37.711 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:37.711 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:37.711 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:37.711 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:37.711 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:37.711 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:37.711 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:37.711 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:37.711 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:29:37.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:29:37.711 00:29:37.711 --- 10.0.0.2 ping statistics --- 00:29:37.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.711 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:29:37.711 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:37.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:37.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:29:37.711 00:29:37.711 --- 10.0.0.1 ping statistics --- 00:29:37.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.711 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:29:37.712 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:37.712 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:29:37.712 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:37.712 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:37.712 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:37.712 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:37.712 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:37.712 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:37.712 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:37.712 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:29:37.712 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:37.712 20:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:37.712 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:37.712 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=331313 00:29:37.712 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:29:37.712 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 331313 00:29:37.712 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 331313 ']' 00:29:37.712 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:37.712 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:37.712 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:37.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:37.712 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:37.712 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:37.712 [2024-11-18 20:30:49.614839] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:29:37.712 [2024-11-18 20:30:49.614933] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:37.712 [2024-11-18 20:30:49.686164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.971 [2024-11-18 20:30:49.731687] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:37.971 [2024-11-18 20:30:49.731739] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:37.971 [2024-11-18 20:30:49.731763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:37.971 [2024-11-18 20:30:49.731775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:37.971 [2024-11-18 20:30:49.731785] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:37.971 [2024-11-18 20:30:49.732363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.971 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:37.971 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:37.971 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:37.971 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:37.971 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:37.971 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:37.971 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:37.971 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=331333 00:29:37.971 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:29:37.971 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:29:37.971 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:29:37.971 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:29:37.971 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:37.971 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:37.971 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.971 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.971 
20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:37.971 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.971 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:37.971 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:37.971 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:37.971 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:29:37.971 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:29:37.971 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=da975f3d-edd3-4b68-ac8b-ee6ad020e28f 00:29:37.971 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:29:37.971 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=22a493cf-73b5-4e16-ac3e-f487f00ce433 00:29:37.971 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:29:37.971 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=fc31f8f4-ec61-4a37-aff5-ec9c25ee8e72 00:29:37.971 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:29:37.971 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.971 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:37.971 null0 00:29:37.971 null1 00:29:37.972 null2 00:29:37.972 [2024-11-18 20:30:49.910435] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:37.972 [2024-11-18 20:30:49.921901] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:29:37.972 [2024-11-18 20:30:49.921989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid331333 ] 00:29:37.972 [2024-11-18 20:30:49.934656] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:37.972 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.972 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 331333 /var/tmp/tgt2.sock 00:29:37.972 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 331333 ']' 00:29:37.972 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:29:37.972 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:37.972 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:29:37.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:29:37.972 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:37.972 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:38.231 [2024-11-18 20:30:49.990155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.231 [2024-11-18 20:30:50.041815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:38.490 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:38.490 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:38.490 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:29:38.750 [2024-11-18 20:30:50.710089] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:38.751 [2024-11-18 20:30:50.726270] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:29:38.751 nvme0n1 nvme0n2 00:29:38.751 nvme1n1 00:29:39.011 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:29:39.011 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:29:39.011 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:39.581 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:29:39.581 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:29:39.581 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:29:39.581 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:29:39.581 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:29:39.581 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:29:39.581 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:29:39.581 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:39.581 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:39.581 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:39.581 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:29:39.581 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:29:39.581 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid da975f3d-edd3-4b68-ac8b-ee6ad020e28f 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:29:40.515 20:30:52 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=da975f3dedd34b68ac8bee6ad020e28f 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DA975F3DEDD34B68AC8BEE6AD020E28F 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ DA975F3DEDD34B68AC8BEE6AD020E28F == \D\A\9\7\5\F\3\D\E\D\D\3\4\B\6\8\A\C\8\B\E\E\6\A\D\0\2\0\E\2\8\F ]] 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 22a493cf-73b5-4e16-ac3e-f487f00ce433 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:29:40.515 
20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=22a493cf73b54e16ac3ef487f00ce433 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 22A493CF73B54E16AC3EF487F00CE433 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 22A493CF73B54E16AC3EF487F00CE433 == \2\2\A\4\9\3\C\F\7\3\B\5\4\E\1\6\A\C\3\E\F\4\8\7\F\0\0\C\E\4\3\3 ]] 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid fc31f8f4-ec61-4a37-aff5-ec9c25ee8e72 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:40.515 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:29:40.516 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:29:40.516 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:29:40.516 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:40.775 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=fc31f8f4ec614a37aff5ec9c25ee8e72 00:29:40.775 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo FC31F8F4EC614A37AFF5EC9C25EE8E72 00:29:40.775 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ FC31F8F4EC614A37AFF5EC9C25EE8E72 == \F\C\3\1\F\8\F\4\E\C\6\1\4\A\3\7\A\F\F\5\E\C\9\C\2\5\E\E\8\E\7\2 ]] 00:29:40.775 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:29:40.775 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:29:40.775 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:29:40.775 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 331333 00:29:40.775 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 331333 ']' 00:29:40.775 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 331333 00:29:40.775 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:40.775 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:40.775 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 331333 00:29:40.775 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:40.775 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:40.775 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 331333' 00:29:40.775 killing process with pid 331333 00:29:40.775 20:30:52 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 331333 00:29:40.775 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 331333 00:29:41.345 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:29:41.345 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:41.345 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:29:41.345 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:41.345 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:29:41.345 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:41.345 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:41.345 rmmod nvme_tcp 00:29:41.345 rmmod nvme_fabrics 00:29:41.345 rmmod nvme_keyring 00:29:41.345 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:41.345 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:29:41.345 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:29:41.345 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 331313 ']' 00:29:41.345 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 331313 00:29:41.345 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 331313 ']' 00:29:41.345 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 331313 00:29:41.345 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:41.345 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:41.345 20:30:53 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 331313 00:29:41.345 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:41.345 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:41.345 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 331313' 00:29:41.345 killing process with pid 331313 00:29:41.345 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 331313 00:29:41.345 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 331313 00:29:41.605 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:41.605 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:41.605 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:41.605 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:29:41.605 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:29:41.605 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:41.605 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:29:41.605 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:41.605 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:41.605 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.605 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:41.605 20:30:53 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:43.511 20:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:43.511 00:29:43.511 real 0m8.346s 00:29:43.511 user 0m8.228s 00:29:43.511 sys 0m2.655s 00:29:43.511 20:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:43.511 20:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:43.511 ************************************ 00:29:43.511 END TEST nvmf_nsid 00:29:43.511 ************************************ 00:29:43.511 20:30:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:43.511 00:29:43.511 real 18m18.017s 00:29:43.511 user 50m58.286s 00:29:43.511 sys 3m59.611s 00:29:43.511 20:30:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:43.511 20:30:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:43.511 ************************************ 00:29:43.511 END TEST nvmf_target_extra 00:29:43.511 ************************************ 00:29:43.769 20:30:55 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:43.769 20:30:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:43.769 20:30:55 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:43.769 20:30:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:43.769 ************************************ 00:29:43.769 START TEST nvmf_host 00:29:43.769 ************************************ 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:43.769 * Looking for test storage... 
00:29:43.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:43.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.769 --rc genhtml_branch_coverage=1 00:29:43.769 --rc genhtml_function_coverage=1 00:29:43.769 --rc genhtml_legend=1 00:29:43.769 --rc geninfo_all_blocks=1 00:29:43.769 --rc geninfo_unexecuted_blocks=1 00:29:43.769 00:29:43.769 ' 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:43.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.769 --rc genhtml_branch_coverage=1 00:29:43.769 --rc genhtml_function_coverage=1 00:29:43.769 --rc genhtml_legend=1 00:29:43.769 --rc 
geninfo_all_blocks=1 00:29:43.769 --rc geninfo_unexecuted_blocks=1 00:29:43.769 00:29:43.769 ' 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:43.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.769 --rc genhtml_branch_coverage=1 00:29:43.769 --rc genhtml_function_coverage=1 00:29:43.769 --rc genhtml_legend=1 00:29:43.769 --rc geninfo_all_blocks=1 00:29:43.769 --rc geninfo_unexecuted_blocks=1 00:29:43.769 00:29:43.769 ' 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:43.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.769 --rc genhtml_branch_coverage=1 00:29:43.769 --rc genhtml_function_coverage=1 00:29:43.769 --rc genhtml_legend=1 00:29:43.769 --rc geninfo_all_blocks=1 00:29:43.769 --rc geninfo_unexecuted_blocks=1 00:29:43.769 00:29:43.769 ' 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.769 20:30:55 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:43.770 20:30:55 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.770 20:30:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:29:43.770 20:30:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:43.770 20:30:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:43.770 20:30:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:43.770 20:30:55 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:43.770 20:30:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:43.770 20:30:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:43.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:43.770 20:30:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:43.770 20:30:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:43.770 20:30:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:43.770 20:30:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:43.770 20:30:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:43.770 20:30:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:43.770 20:30:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:43.770 20:30:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:43.770 20:30:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:43.770 20:30:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.770 ************************************ 00:29:43.770 START TEST nvmf_multicontroller 00:29:43.770 ************************************ 00:29:43.770 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:44.029 * Looking for test storage... 
00:29:44.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:44.029 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:44.029 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:44.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.030 --rc genhtml_branch_coverage=1 00:29:44.030 --rc genhtml_function_coverage=1 
00:29:44.030 --rc genhtml_legend=1 00:29:44.030 --rc geninfo_all_blocks=1 00:29:44.030 --rc geninfo_unexecuted_blocks=1 00:29:44.030 00:29:44.030 ' 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:44.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.030 --rc genhtml_branch_coverage=1 00:29:44.030 --rc genhtml_function_coverage=1 00:29:44.030 --rc genhtml_legend=1 00:29:44.030 --rc geninfo_all_blocks=1 00:29:44.030 --rc geninfo_unexecuted_blocks=1 00:29:44.030 00:29:44.030 ' 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:44.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.030 --rc genhtml_branch_coverage=1 00:29:44.030 --rc genhtml_function_coverage=1 00:29:44.030 --rc genhtml_legend=1 00:29:44.030 --rc geninfo_all_blocks=1 00:29:44.030 --rc geninfo_unexecuted_blocks=1 00:29:44.030 00:29:44.030 ' 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:44.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.030 --rc genhtml_branch_coverage=1 00:29:44.030 --rc genhtml_function_coverage=1 00:29:44.030 --rc genhtml_legend=1 00:29:44.030 --rc geninfo_all_blocks=1 00:29:44.030 --rc geninfo_unexecuted_blocks=1 00:29:44.030 00:29:44.030 ' 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:44.030 20:30:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:44.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:44.030 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:44.031 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:44.031 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:44.031 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:44.031 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:44.031 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:44.031 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:44.031 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:29:44.031 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.031 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:44.031 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.031 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:44.031 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:44.031 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:29:44.031 20:30:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:45.932 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:45.932 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:45.932 20:30:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:45.932 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:45.932 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:45.933 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:45.933 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:45.933 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:45.933 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:45.933 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:45.933 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:45.933 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:45.933 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:45.933 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:29:45.933 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:45.933 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:45.933 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:45.933 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:45.933 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:45.933 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:45.933 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:45.933 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:45.933 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:45.933 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:45.933 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:45.933 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:45.933 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:45.933 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:45.933 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:46.194 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:46.194 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:46.194 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:46.194 20:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:46.194 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:46.194 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:46.194 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:46.194 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:46.194 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:46.194 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:46.194 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:46.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:46.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:29:46.194 00:29:46.194 --- 10.0.0.2 ping statistics --- 00:29:46.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.194 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:29:46.194 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:46.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:46.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:29:46.194 00:29:46.194 --- 10.0.0.1 ping statistics --- 00:29:46.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.194 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:29:46.194 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:46.194 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:29:46.194 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:46.194 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:46.194 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:46.194 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:46.194 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:46.194 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:46.194 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:46.194 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:46.194 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:46.194 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:46.194 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:46.194 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=333766 00:29:46.194 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:46.194 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 333766 00:29:46.194 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 333766 ']' 00:29:46.194 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.194 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:46.194 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:46.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:46.194 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:46.194 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:46.194 [2024-11-18 20:30:58.134992] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:29:46.194 [2024-11-18 20:30:58.135068] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:46.453 [2024-11-18 20:30:58.207824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:46.453 [2024-11-18 20:30:58.256589] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:46.453 [2024-11-18 20:30:58.256660] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:46.453 [2024-11-18 20:30:58.256681] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:46.453 [2024-11-18 20:30:58.256693] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:46.453 [2024-11-18 20:30:58.256703] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:46.453 [2024-11-18 20:30:58.258240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:46.453 [2024-11-18 20:30:58.258308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:46.453 [2024-11-18 20:30:58.258311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:46.453 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:46.453 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:46.453 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:46.453 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:46.453 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:46.453 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:46.453 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:46.453 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.453 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:46.453 [2024-11-18 20:30:58.403580] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:46.453 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.453 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:46.453 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.453 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:46.453 Malloc0 00:29:46.453 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.453 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:46.453 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.453 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:46.453 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.453 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:46.453 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.453 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:46.711 [2024-11-18 
20:30:58.464835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:46.711 [2024-11-18 20:30:58.472726] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:46.711 Malloc1 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=333917 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
write -t 1 -f 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 333917 /var/tmp/bdevperf.sock 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 333917 ']' 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:46.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:46.711 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:46.712 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:46.971 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:46.971 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:46.971 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:46.971 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.971 20:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:47.231 NVMe0n1 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.231 1 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:47.231 20:30:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:47.231 request: 00:29:47.231 { 00:29:47.231 "name": "NVMe0", 00:29:47.231 "trtype": "tcp", 00:29:47.231 "traddr": "10.0.0.2", 00:29:47.231 "adrfam": "ipv4", 00:29:47.231 "trsvcid": "4420", 00:29:47.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:47.231 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:47.231 "hostaddr": "10.0.0.1", 00:29:47.231 "prchk_reftag": false, 00:29:47.231 "prchk_guard": false, 00:29:47.231 "hdgst": false, 00:29:47.231 "ddgst": false, 00:29:47.231 "allow_unrecognized_csi": false, 00:29:47.231 "method": "bdev_nvme_attach_controller", 00:29:47.231 "req_id": 1 00:29:47.231 } 00:29:47.231 Got JSON-RPC error response 00:29:47.231 response: 00:29:47.231 { 00:29:47.231 "code": -114, 00:29:47.231 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:47.231 } 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:47.231 20:30:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:47.231 request: 00:29:47.231 { 00:29:47.231 "name": "NVMe0", 00:29:47.231 "trtype": "tcp", 00:29:47.231 "traddr": "10.0.0.2", 00:29:47.231 "adrfam": "ipv4", 00:29:47.231 "trsvcid": "4420", 00:29:47.231 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:47.231 "hostaddr": "10.0.0.1", 00:29:47.231 "prchk_reftag": false, 00:29:47.231 "prchk_guard": false, 00:29:47.231 "hdgst": false, 00:29:47.231 "ddgst": false, 00:29:47.231 "allow_unrecognized_csi": false, 00:29:47.231 "method": "bdev_nvme_attach_controller", 00:29:47.231 "req_id": 1 00:29:47.231 } 00:29:47.231 Got JSON-RPC error response 00:29:47.231 response: 00:29:47.231 { 00:29:47.231 "code": -114, 00:29:47.231 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:47.231 } 00:29:47.231 20:30:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.231 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:47.231 request: 00:29:47.231 { 00:29:47.231 "name": "NVMe0", 00:29:47.231 "trtype": "tcp", 00:29:47.231 "traddr": "10.0.0.2", 00:29:47.231 "adrfam": "ipv4", 00:29:47.231 "trsvcid": "4420", 00:29:47.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:47.231 "hostaddr": "10.0.0.1", 00:29:47.232 "prchk_reftag": false, 00:29:47.232 "prchk_guard": false, 00:29:47.232 "hdgst": false, 00:29:47.232 "ddgst": false, 00:29:47.232 "multipath": "disable", 00:29:47.232 "allow_unrecognized_csi": false, 00:29:47.232 "method": "bdev_nvme_attach_controller", 00:29:47.232 "req_id": 1 00:29:47.232 } 00:29:47.232 Got JSON-RPC error response 00:29:47.232 response: 00:29:47.232 { 00:29:47.232 "code": -114, 00:29:47.232 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:47.232 } 00:29:47.232 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:47.232 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:47.232 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:47.232 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:47.232 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:47.232 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:47.232 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:47.232 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:47.232 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:47.232 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:47.232 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:47.232 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:47.232 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:47.232 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.232 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:47.232 request: 00:29:47.232 { 00:29:47.232 "name": "NVMe0", 00:29:47.232 "trtype": "tcp", 00:29:47.232 "traddr": "10.0.0.2", 00:29:47.232 "adrfam": "ipv4", 00:29:47.232 "trsvcid": "4420", 00:29:47.232 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:47.232 "hostaddr": "10.0.0.1", 00:29:47.232 "prchk_reftag": false, 00:29:47.232 "prchk_guard": false, 00:29:47.232 "hdgst": false, 00:29:47.232 "ddgst": false, 00:29:47.232 "multipath": "failover", 00:29:47.232 "allow_unrecognized_csi": false, 00:29:47.232 "method": "bdev_nvme_attach_controller", 00:29:47.232 "req_id": 1 00:29:47.232 } 00:29:47.232 Got JSON-RPC error response 00:29:47.232 response: 00:29:47.232 { 00:29:47.232 "code": -114, 00:29:47.232 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:47.232 } 00:29:47.232 20:30:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:47.232 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:47.232 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:47.232 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:47.232 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:47.232 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:47.232 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.232 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:47.489 NVMe0n1 00:29:47.489 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.489 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:47.489 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.489 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:47.489 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.489 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:47.489 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.489 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:47.489 00:29:47.489 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.489 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:47.489 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:47.489 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.489 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:47.489 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.489 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:47.489 20:30:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:48.866 { 00:29:48.866 "results": [ 00:29:48.866 { 00:29:48.866 "job": "NVMe0n1", 00:29:48.866 "core_mask": "0x1", 00:29:48.866 "workload": "write", 00:29:48.866 "status": "finished", 00:29:48.866 "queue_depth": 128, 00:29:48.866 "io_size": 4096, 00:29:48.866 "runtime": 1.007069, 00:29:48.866 "iops": 18413.83261722881, 00:29:48.866 "mibps": 71.92903366105004, 00:29:48.866 "io_failed": 0, 00:29:48.866 "io_timeout": 0, 00:29:48.866 "avg_latency_us": 6940.526579107149, 00:29:48.866 "min_latency_us": 1929.671111111111, 00:29:48.866 "max_latency_us": 12427.567407407407 00:29:48.866 } 00:29:48.866 ], 00:29:48.866 "core_count": 1 00:29:48.866 } 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 333917 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 333917 ']' 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 333917 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 333917 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 333917' 00:29:48.866 killing process with pid 333917 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 333917 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 333917 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:29:48.866 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:48.866 [2024-11-18 20:30:58.574121] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:29:48.866 [2024-11-18 20:30:58.574221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid333917 ] 00:29:48.866 [2024-11-18 20:30:58.641509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.866 [2024-11-18 20:30:58.687653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.866 [2024-11-18 20:30:59.413418] bdev.c:4686:bdev_name_add: *ERROR*: Bdev name b54596ea-d06e-4dba-afd3-93556d97d22c already exists 00:29:48.866 [2024-11-18 20:30:59.413458] bdev.c:7824:bdev_register: *ERROR*: Unable to add uuid:b54596ea-d06e-4dba-afd3-93556d97d22c alias for bdev NVMe1n1 00:29:48.866 [2024-11-18 20:30:59.413473] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:48.866 Running I/O for 1 seconds... 00:29:48.866 18416.00 IOPS, 71.94 MiB/s 00:29:48.866 Latency(us) 00:29:48.866 [2024-11-18T19:31:00.874Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:48.866 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:48.866 NVMe0n1 : 1.01 18413.83 71.93 0.00 0.00 6940.53 1929.67 12427.57 00:29:48.866 [2024-11-18T19:31:00.874Z] =================================================================================================================== 00:29:48.866 [2024-11-18T19:31:00.874Z] Total : 18413.83 71.93 0.00 0.00 6940.53 1929.67 12427.57 00:29:48.866 Received shutdown signal, test time was about 1.000000 seconds 00:29:48.866 00:29:48.866 Latency(us) 00:29:48.866 [2024-11-18T19:31:00.874Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:48.866 [2024-11-18T19:31:00.874Z] =================================================================================================================== 00:29:48.866 [2024-11-18T19:31:00.874Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:29:48.866 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:48.866 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:48.866 rmmod nvme_tcp 00:29:49.125 rmmod nvme_fabrics 00:29:49.125 rmmod nvme_keyring 00:29:49.125 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:49.125 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:49.125 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:49.125 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 333766 ']' 00:29:49.125 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 333766 00:29:49.125 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 333766 ']' 00:29:49.125 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 333766 
00:29:49.126 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:49.126 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:49.126 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 333766 00:29:49.126 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:49.126 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:49.126 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 333766' 00:29:49.126 killing process with pid 333766 00:29:49.126 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 333766 00:29:49.126 20:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 333766 00:29:49.386 20:31:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:49.386 20:31:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:49.386 20:31:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:49.386 20:31:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:49.386 20:31:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:29:49.386 20:31:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:49.386 20:31:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:29:49.386 20:31:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:49.386 20:31:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:29:49.386 20:31:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.386 20:31:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:49.386 20:31:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.295 20:31:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:51.295 00:29:51.295 real 0m7.513s 00:29:51.295 user 0m12.091s 00:29:51.295 sys 0m2.252s 00:29:51.295 20:31:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:51.295 20:31:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:51.295 ************************************ 00:29:51.295 END TEST nvmf_multicontroller 00:29:51.295 ************************************ 00:29:51.295 20:31:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:51.295 20:31:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:51.295 20:31:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:51.295 20:31:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.553 ************************************ 00:29:51.554 START TEST nvmf_aer 00:29:51.554 ************************************ 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:51.554 * Looking for test storage... 
00:29:51.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:51.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.554 --rc genhtml_branch_coverage=1 00:29:51.554 --rc genhtml_function_coverage=1 00:29:51.554 --rc genhtml_legend=1 00:29:51.554 --rc geninfo_all_blocks=1 00:29:51.554 --rc geninfo_unexecuted_blocks=1 00:29:51.554 00:29:51.554 ' 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:51.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.554 --rc 
genhtml_branch_coverage=1 00:29:51.554 --rc genhtml_function_coverage=1 00:29:51.554 --rc genhtml_legend=1 00:29:51.554 --rc geninfo_all_blocks=1 00:29:51.554 --rc geninfo_unexecuted_blocks=1 00:29:51.554 00:29:51.554 ' 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:51.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.554 --rc genhtml_branch_coverage=1 00:29:51.554 --rc genhtml_function_coverage=1 00:29:51.554 --rc genhtml_legend=1 00:29:51.554 --rc geninfo_all_blocks=1 00:29:51.554 --rc geninfo_unexecuted_blocks=1 00:29:51.554 00:29:51.554 ' 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:51.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.554 --rc genhtml_branch_coverage=1 00:29:51.554 --rc genhtml_function_coverage=1 00:29:51.554 --rc genhtml_legend=1 00:29:51.554 --rc geninfo_all_blocks=1 00:29:51.554 --rc geninfo_unexecuted_blocks=1 00:29:51.554 00:29:51.554 ' 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:51.554 20:31:03 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:51.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:51.554 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:51.555 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.555 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:51.555 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.555 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:51.555 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:51.555 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:51.555 20:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:54.088 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:54.088 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:54.088 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:54.088 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:54.088 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:54.089 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:54.089 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.089 20:31:05 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:54.089 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:54.089 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:54.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:54.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:29:54.089 00:29:54.089 --- 10.0.0.2 ping statistics --- 00:29:54.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.089 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:54.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:54.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:29:54.089 00:29:54.089 --- 10.0.0.1 ping statistics --- 00:29:54.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.089 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:54.089 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:54.090 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:54.090 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:54.090 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:54.090 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:54.090 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:54.090 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:54.090 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:54.090 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:29:54.090 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=336131 00:29:54.090 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:54.090 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 336131 00:29:54.090 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 336131 ']' 00:29:54.090 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:54.090 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:54.090 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:54.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:54.090 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:54.090 20:31:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:54.090 [2024-11-18 20:31:05.798615] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:29:54.090 [2024-11-18 20:31:05.798726] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:54.090 [2024-11-18 20:31:05.873243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:54.090 [2024-11-18 20:31:05.919696] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:54.090 [2024-11-18 20:31:05.919750] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:54.090 [2024-11-18 20:31:05.919773] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:54.090 [2024-11-18 20:31:05.919788] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:54.090 [2024-11-18 20:31:05.919799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:54.090 [2024-11-18 20:31:05.921186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:54.090 [2024-11-18 20:31:05.921245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:54.090 [2024-11-18 20:31:05.921267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:54.090 [2024-11-18 20:31:05.921272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.090 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:54.090 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:29:54.090 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:54.090 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:54.090 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:54.090 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:54.090 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:54.090 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.090 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:54.090 [2024-11-18 20:31:06.076898] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:54.090 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.090 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:54.090 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.090 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:54.349 Malloc0 00:29:54.349 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.349 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:54.349 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.349 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:54.349 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.349 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:54.349 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.349 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:54.349 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.350 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:54.350 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.350 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:54.350 [2024-11-18 20:31:06.137649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:29:54.350 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.350 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:54.350 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.350 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:54.350 [ 00:29:54.350 { 00:29:54.350 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:54.350 "subtype": "Discovery", 00:29:54.350 "listen_addresses": [], 00:29:54.350 "allow_any_host": true, 00:29:54.350 "hosts": [] 00:29:54.350 }, 00:29:54.350 { 00:29:54.350 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:54.350 "subtype": "NVMe", 00:29:54.350 "listen_addresses": [ 00:29:54.350 { 00:29:54.350 "trtype": "TCP", 00:29:54.350 "adrfam": "IPv4", 00:29:54.350 "traddr": "10.0.0.2", 00:29:54.350 "trsvcid": "4420" 00:29:54.350 } 00:29:54.350 ], 00:29:54.350 "allow_any_host": true, 00:29:54.350 "hosts": [], 00:29:54.350 "serial_number": "SPDK00000000000001", 00:29:54.350 "model_number": "SPDK bdev Controller", 00:29:54.350 "max_namespaces": 2, 00:29:54.350 "min_cntlid": 1, 00:29:54.350 "max_cntlid": 65519, 00:29:54.350 "namespaces": [ 00:29:54.350 { 00:29:54.350 "nsid": 1, 00:29:54.350 "bdev_name": "Malloc0", 00:29:54.350 "name": "Malloc0", 00:29:54.350 "nguid": "7B6D90ABDFCC4879A560464B96C9DC15", 00:29:54.350 "uuid": "7b6d90ab-dfcc-4879-a560-464b96c9dc15" 00:29:54.350 } 00:29:54.350 ] 00:29:54.350 } 00:29:54.350 ] 00:29:54.350 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.350 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:54.350 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:54.350 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=336156 00:29:54.350 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:29:54.350 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:54.350 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:29:54.350 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:54.350 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:29:54.350 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:29:54.350 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:54.350 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:54.350 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:29:54.350 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:29:54.350 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:54.609 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:54.610 Malloc1 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:54.610 [ 00:29:54.610 { 00:29:54.610 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:54.610 "subtype": "Discovery", 00:29:54.610 "listen_addresses": [], 00:29:54.610 "allow_any_host": true, 00:29:54.610 "hosts": [] 00:29:54.610 }, 00:29:54.610 { 00:29:54.610 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:54.610 "subtype": "NVMe", 00:29:54.610 "listen_addresses": [ 00:29:54.610 { 00:29:54.610 "trtype": "TCP", 00:29:54.610 "adrfam": "IPv4", 00:29:54.610 "traddr": "10.0.0.2", 00:29:54.610 "trsvcid": "4420" 00:29:54.610 } 00:29:54.610 ], 00:29:54.610 "allow_any_host": true, 00:29:54.610 "hosts": [], 00:29:54.610 "serial_number": "SPDK00000000000001", 00:29:54.610 "model_number": 
"SPDK bdev Controller", 00:29:54.610 "max_namespaces": 2, 00:29:54.610 "min_cntlid": 1, 00:29:54.610 "max_cntlid": 65519, 00:29:54.610 "namespaces": [ 00:29:54.610 { 00:29:54.610 "nsid": 1, 00:29:54.610 "bdev_name": "Malloc0", 00:29:54.610 "name": "Malloc0", 00:29:54.610 "nguid": "7B6D90ABDFCC4879A560464B96C9DC15", 00:29:54.610 "uuid": "7b6d90ab-dfcc-4879-a560-464b96c9dc15" 00:29:54.610 }, 00:29:54.610 { 00:29:54.610 "nsid": 2, 00:29:54.610 "bdev_name": "Malloc1", 00:29:54.610 "name": "Malloc1", 00:29:54.610 "nguid": "5E6CBE5D173D4C59BC89055E0ADDFC41", 00:29:54.610 "uuid": "5e6cbe5d-173d-4c59-bc89-055e0addfc41" 00:29:54.610 } 00:29:54.610 ] 00:29:54.610 } 00:29:54.610 ] 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 336156 00:29:54.610 Asynchronous Event Request test 00:29:54.610 Attaching to 10.0.0.2 00:29:54.610 Attached to 10.0.0.2 00:29:54.610 Registering asynchronous event callbacks... 00:29:54.610 Starting namespace attribute notice tests for all controllers... 00:29:54.610 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:54.610 aer_cb - Changed Namespace 00:29:54.610 Cleaning up... 
00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:54.610 rmmod nvme_tcp 
00:29:54.610 rmmod nvme_fabrics 00:29:54.610 rmmod nvme_keyring 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 336131 ']' 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 336131 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 336131 ']' 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 336131 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 336131 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 336131' 00:29:54.610 killing process with pid 336131 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 336131 00:29:54.610 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 336131 00:29:54.871 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:54.871 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:54.871 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:54.871 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@297 -- # iptr 00:29:54.871 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:29:54.871 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:54.871 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:29:54.871 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:54.871 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:54.871 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.871 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:54.871 20:31:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.408 20:31:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:57.408 00:29:57.408 real 0m5.536s 00:29:57.408 user 0m4.395s 00:29:57.408 sys 0m2.007s 00:29:57.408 20:31:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:57.408 20:31:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:57.408 ************************************ 00:29:57.408 END TEST nvmf_aer 00:29:57.408 ************************************ 00:29:57.408 20:31:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:57.408 20:31:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:57.408 20:31:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:57.408 20:31:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.408 ************************************ 00:29:57.408 START TEST nvmf_async_init 00:29:57.408 
************************************ 00:29:57.408 20:31:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:57.408 * Looking for test storage... 00:29:57.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:57.408 20:31:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:57.408 20:31:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:29:57.408 20:31:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:57.408 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:57.408 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:57.408 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:57.408 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:57.408 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:57.408 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:57.408 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:57.408 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:57.408 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:57.408 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- 
scripts/common.sh@344 -- # case "$op" in 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:57.409 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:29:57.409 --rc genhtml_branch_coverage=1 00:29:57.409 --rc genhtml_function_coverage=1 00:29:57.409 --rc genhtml_legend=1 00:29:57.409 --rc geninfo_all_blocks=1 00:29:57.409 --rc geninfo_unexecuted_blocks=1 00:29:57.409 00:29:57.409 ' 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:57.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.409 --rc genhtml_branch_coverage=1 00:29:57.409 --rc genhtml_function_coverage=1 00:29:57.409 --rc genhtml_legend=1 00:29:57.409 --rc geninfo_all_blocks=1 00:29:57.409 --rc geninfo_unexecuted_blocks=1 00:29:57.409 00:29:57.409 ' 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:57.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.409 --rc genhtml_branch_coverage=1 00:29:57.409 --rc genhtml_function_coverage=1 00:29:57.409 --rc genhtml_legend=1 00:29:57.409 --rc geninfo_all_blocks=1 00:29:57.409 --rc geninfo_unexecuted_blocks=1 00:29:57.409 00:29:57.409 ' 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:57.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.409 --rc genhtml_branch_coverage=1 00:29:57.409 --rc genhtml_function_coverage=1 00:29:57.409 --rc genhtml_legend=1 00:29:57.409 --rc geninfo_all_blocks=1 00:29:57.409 --rc geninfo_unexecuted_blocks=1 00:29:57.409 00:29:57.409 ' 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.409 20:31:09 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.409 
20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:57.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=f92fe537cbba4f9fa8c1489d84d8ea82 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:57.409 20:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:59.311 20:31:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:59.311 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:59.311 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:59.311 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:59.311 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:59.311 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:59.312 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:59.312 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:59.312 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:59.312 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:59.312 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:59.312 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:59.312 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:59.312 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:59.312 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:59.312 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:59.312 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:59.312 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:59.570 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:59.570 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:59.570 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:59.570 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:59.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:59.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:29:59.570 00:29:59.570 --- 10.0.0.2 ping statistics --- 00:29:59.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.570 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:29:59.570 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:59.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:59.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:29:59.570 00:29:59.570 --- 10.0.0.1 ping statistics --- 00:29:59.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.570 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:29:59.570 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:59.570 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:29:59.570 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:59.570 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.570 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:59.570 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:59.570 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.570 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:59.570 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:59.570 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:59.570 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:59.570 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:29:59.570 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.570 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=338219 00:29:59.570 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:59.570 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 338219 00:29:59.570 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 338219 ']' 00:29:59.570 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.570 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:59.570 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:59.570 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:59.570 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.570 [2024-11-18 20:31:11.446877] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:29:59.570 [2024-11-18 20:31:11.446950] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.570 [2024-11-18 20:31:11.517303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.570 [2024-11-18 20:31:11.562095] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:59.570 [2024-11-18 20:31:11.562144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:59.570 [2024-11-18 20:31:11.562172] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:59.570 [2024-11-18 20:31:11.562184] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:59.570 [2024-11-18 20:31:11.562193] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:59.570 [2024-11-18 20:31:11.562756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.830 [2024-11-18 20:31:11.704941] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.830 null0 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f92fe537cbba4f9fa8c1489d84d8ea82 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.830 [2024-11-18 20:31:11.744996] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.830 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:00.089 nvme0n1 00:30:00.089 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.089 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:00.089 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.089 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:00.089 [ 00:30:00.089 { 00:30:00.089 "name": "nvme0n1", 00:30:00.089 "aliases": [ 00:30:00.089 "f92fe537-cbba-4f9f-a8c1-489d84d8ea82" 00:30:00.089 ], 00:30:00.089 "product_name": "NVMe disk", 00:30:00.089 "block_size": 512, 00:30:00.089 "num_blocks": 2097152, 00:30:00.089 "uuid": "f92fe537-cbba-4f9f-a8c1-489d84d8ea82", 00:30:00.089 "numa_id": 0, 00:30:00.089 "assigned_rate_limits": { 00:30:00.089 "rw_ios_per_sec": 0, 00:30:00.089 "rw_mbytes_per_sec": 0, 00:30:00.089 "r_mbytes_per_sec": 0, 00:30:00.089 "w_mbytes_per_sec": 0 00:30:00.089 }, 00:30:00.089 "claimed": false, 00:30:00.089 "zoned": false, 00:30:00.089 "supported_io_types": { 00:30:00.089 "read": true, 00:30:00.089 "write": true, 00:30:00.089 "unmap": false, 00:30:00.089 "flush": true, 00:30:00.089 "reset": true, 00:30:00.089 "nvme_admin": true, 00:30:00.089 "nvme_io": true, 00:30:00.089 "nvme_io_md": false, 00:30:00.089 "write_zeroes": true, 00:30:00.089 "zcopy": false, 00:30:00.089 "get_zone_info": false, 00:30:00.089 "zone_management": false, 00:30:00.089 "zone_append": false, 00:30:00.089 "compare": true, 00:30:00.089 "compare_and_write": true, 00:30:00.089 "abort": true, 00:30:00.089 "seek_hole": false, 00:30:00.089 "seek_data": false, 00:30:00.089 "copy": true, 00:30:00.089 
"nvme_iov_md": false 00:30:00.089 }, 00:30:00.089 "memory_domains": [ 00:30:00.089 { 00:30:00.089 "dma_device_id": "system", 00:30:00.089 "dma_device_type": 1 00:30:00.089 } 00:30:00.089 ], 00:30:00.089 "driver_specific": { 00:30:00.089 "nvme": [ 00:30:00.089 { 00:30:00.089 "trid": { 00:30:00.089 "trtype": "TCP", 00:30:00.089 "adrfam": "IPv4", 00:30:00.089 "traddr": "10.0.0.2", 00:30:00.089 "trsvcid": "4420", 00:30:00.089 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:00.089 }, 00:30:00.089 "ctrlr_data": { 00:30:00.089 "cntlid": 1, 00:30:00.089 "vendor_id": "0x8086", 00:30:00.089 "model_number": "SPDK bdev Controller", 00:30:00.089 "serial_number": "00000000000000000000", 00:30:00.089 "firmware_revision": "25.01", 00:30:00.089 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:00.089 "oacs": { 00:30:00.089 "security": 0, 00:30:00.089 "format": 0, 00:30:00.089 "firmware": 0, 00:30:00.089 "ns_manage": 0 00:30:00.089 }, 00:30:00.089 "multi_ctrlr": true, 00:30:00.089 "ana_reporting": false 00:30:00.089 }, 00:30:00.089 "vs": { 00:30:00.089 "nvme_version": "1.3" 00:30:00.089 }, 00:30:00.089 "ns_data": { 00:30:00.089 "id": 1, 00:30:00.089 "can_share": true 00:30:00.089 } 00:30:00.089 } 00:30:00.089 ], 00:30:00.089 "mp_policy": "active_passive" 00:30:00.089 } 00:30:00.089 } 00:30:00.089 ] 00:30:00.089 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.090 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:00.090 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.090 20:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:00.090 [2024-11-18 20:31:11.994020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:00.090 [2024-11-18 20:31:11.994092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x178c480 (9): Bad file descriptor 00:30:00.349 [2024-11-18 20:31:12.125767] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:00.349 [ 00:30:00.349 { 00:30:00.349 "name": "nvme0n1", 00:30:00.349 "aliases": [ 00:30:00.349 "f92fe537-cbba-4f9f-a8c1-489d84d8ea82" 00:30:00.349 ], 00:30:00.349 "product_name": "NVMe disk", 00:30:00.349 "block_size": 512, 00:30:00.349 "num_blocks": 2097152, 00:30:00.349 "uuid": "f92fe537-cbba-4f9f-a8c1-489d84d8ea82", 00:30:00.349 "numa_id": 0, 00:30:00.349 "assigned_rate_limits": { 00:30:00.349 "rw_ios_per_sec": 0, 00:30:00.349 "rw_mbytes_per_sec": 0, 00:30:00.349 "r_mbytes_per_sec": 0, 00:30:00.349 "w_mbytes_per_sec": 0 00:30:00.349 }, 00:30:00.349 "claimed": false, 00:30:00.349 "zoned": false, 00:30:00.349 "supported_io_types": { 00:30:00.349 "read": true, 00:30:00.349 "write": true, 00:30:00.349 "unmap": false, 00:30:00.349 "flush": true, 00:30:00.349 "reset": true, 00:30:00.349 "nvme_admin": true, 00:30:00.349 "nvme_io": true, 00:30:00.349 "nvme_io_md": false, 00:30:00.349 "write_zeroes": true, 00:30:00.349 "zcopy": false, 00:30:00.349 "get_zone_info": false, 00:30:00.349 "zone_management": false, 00:30:00.349 "zone_append": false, 00:30:00.349 "compare": true, 00:30:00.349 "compare_and_write": true, 00:30:00.349 "abort": true, 00:30:00.349 "seek_hole": false, 00:30:00.349 "seek_data": false, 00:30:00.349 "copy": true, 00:30:00.349 "nvme_iov_md": false 00:30:00.349 }, 00:30:00.349 "memory_domains": [ 
00:30:00.349 { 00:30:00.349 "dma_device_id": "system", 00:30:00.349 "dma_device_type": 1 00:30:00.349 } 00:30:00.349 ], 00:30:00.349 "driver_specific": { 00:30:00.349 "nvme": [ 00:30:00.349 { 00:30:00.349 "trid": { 00:30:00.349 "trtype": "TCP", 00:30:00.349 "adrfam": "IPv4", 00:30:00.349 "traddr": "10.0.0.2", 00:30:00.349 "trsvcid": "4420", 00:30:00.349 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:00.349 }, 00:30:00.349 "ctrlr_data": { 00:30:00.349 "cntlid": 2, 00:30:00.349 "vendor_id": "0x8086", 00:30:00.349 "model_number": "SPDK bdev Controller", 00:30:00.349 "serial_number": "00000000000000000000", 00:30:00.349 "firmware_revision": "25.01", 00:30:00.349 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:00.349 "oacs": { 00:30:00.349 "security": 0, 00:30:00.349 "format": 0, 00:30:00.349 "firmware": 0, 00:30:00.349 "ns_manage": 0 00:30:00.349 }, 00:30:00.349 "multi_ctrlr": true, 00:30:00.349 "ana_reporting": false 00:30:00.349 }, 00:30:00.349 "vs": { 00:30:00.349 "nvme_version": "1.3" 00:30:00.349 }, 00:30:00.349 "ns_data": { 00:30:00.349 "id": 1, 00:30:00.349 "can_share": true 00:30:00.349 } 00:30:00.349 } 00:30:00.349 ], 00:30:00.349 "mp_policy": "active_passive" 00:30:00.349 } 00:30:00.349 } 00:30:00.349 ] 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.xFKE632jov 
00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.xFKE632jov 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.xFKE632jov 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:00.349 [2024-11-18 20:31:12.182648] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:00.349 [2024-11-18 20:31:12.182793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:00.349 [2024-11-18 20:31:12.198691] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:00.349 nvme0n1 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.349 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:00.349 [ 00:30:00.349 { 00:30:00.349 "name": "nvme0n1", 00:30:00.349 "aliases": [ 00:30:00.349 "f92fe537-cbba-4f9f-a8c1-489d84d8ea82" 00:30:00.349 ], 00:30:00.349 "product_name": "NVMe disk", 00:30:00.349 "block_size": 512, 00:30:00.349 "num_blocks": 2097152, 00:30:00.350 "uuid": "f92fe537-cbba-4f9f-a8c1-489d84d8ea82", 00:30:00.350 "numa_id": 0, 00:30:00.350 "assigned_rate_limits": { 00:30:00.350 "rw_ios_per_sec": 0, 00:30:00.350 
"rw_mbytes_per_sec": 0, 00:30:00.350 "r_mbytes_per_sec": 0, 00:30:00.350 "w_mbytes_per_sec": 0 00:30:00.350 }, 00:30:00.350 "claimed": false, 00:30:00.350 "zoned": false, 00:30:00.350 "supported_io_types": { 00:30:00.350 "read": true, 00:30:00.350 "write": true, 00:30:00.350 "unmap": false, 00:30:00.350 "flush": true, 00:30:00.350 "reset": true, 00:30:00.350 "nvme_admin": true, 00:30:00.350 "nvme_io": true, 00:30:00.350 "nvme_io_md": false, 00:30:00.350 "write_zeroes": true, 00:30:00.350 "zcopy": false, 00:30:00.350 "get_zone_info": false, 00:30:00.350 "zone_management": false, 00:30:00.350 "zone_append": false, 00:30:00.350 "compare": true, 00:30:00.350 "compare_and_write": true, 00:30:00.350 "abort": true, 00:30:00.350 "seek_hole": false, 00:30:00.350 "seek_data": false, 00:30:00.350 "copy": true, 00:30:00.350 "nvme_iov_md": false 00:30:00.350 }, 00:30:00.350 "memory_domains": [ 00:30:00.350 { 00:30:00.350 "dma_device_id": "system", 00:30:00.350 "dma_device_type": 1 00:30:00.350 } 00:30:00.350 ], 00:30:00.350 "driver_specific": { 00:30:00.350 "nvme": [ 00:30:00.350 { 00:30:00.350 "trid": { 00:30:00.350 "trtype": "TCP", 00:30:00.350 "adrfam": "IPv4", 00:30:00.350 "traddr": "10.0.0.2", 00:30:00.350 "trsvcid": "4421", 00:30:00.350 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:00.350 }, 00:30:00.350 "ctrlr_data": { 00:30:00.350 "cntlid": 3, 00:30:00.350 "vendor_id": "0x8086", 00:30:00.350 "model_number": "SPDK bdev Controller", 00:30:00.350 "serial_number": "00000000000000000000", 00:30:00.350 "firmware_revision": "25.01", 00:30:00.350 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:00.350 "oacs": { 00:30:00.350 "security": 0, 00:30:00.350 "format": 0, 00:30:00.350 "firmware": 0, 00:30:00.350 "ns_manage": 0 00:30:00.350 }, 00:30:00.350 "multi_ctrlr": true, 00:30:00.350 "ana_reporting": false 00:30:00.350 }, 00:30:00.350 "vs": { 00:30:00.350 "nvme_version": "1.3" 00:30:00.350 }, 00:30:00.350 "ns_data": { 00:30:00.350 "id": 1, 00:30:00.350 "can_share": true 00:30:00.350 } 
00:30:00.350 } 00:30:00.350 ], 00:30:00.350 "mp_policy": "active_passive" 00:30:00.350 } 00:30:00.350 } 00:30:00.350 ] 00:30:00.350 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.350 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:00.350 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.350 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:00.350 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.350 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.xFKE632jov 00:30:00.350 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:30:00.350 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:30:00.350 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:00.350 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:30:00.350 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:00.350 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:30:00.350 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:00.350 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:00.350 rmmod nvme_tcp 00:30:00.350 rmmod nvme_fabrics 00:30:00.350 rmmod nvme_keyring 00:30:00.350 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:00.350 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:30:00.350 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:30:00.350 20:31:12 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 338219 ']' 00:30:00.350 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 338219 00:30:00.350 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 338219 ']' 00:30:00.350 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 338219 00:30:00.350 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:30:00.350 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:00.350 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 338219 00:30:00.609 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:00.609 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:00.609 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 338219' 00:30:00.609 killing process with pid 338219 00:30:00.609 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 338219 00:30:00.609 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 338219 00:30:00.609 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:00.609 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:00.609 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:00.609 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:30:00.609 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:30:00.609 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:00.609 20:31:12 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:30:00.609 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:00.609 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:00.609 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.609 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:00.609 20:31:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:03.150 00:30:03.150 real 0m5.710s 00:30:03.150 user 0m2.134s 00:30:03.150 sys 0m1.974s 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:03.150 ************************************ 00:30:03.150 END TEST nvmf_async_init 00:30:03.150 ************************************ 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.150 ************************************ 00:30:03.150 START TEST dma 00:30:03.150 ************************************ 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:03.150 * 
Looking for test storage... 00:30:03.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:03.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.150 --rc genhtml_branch_coverage=1 00:30:03.150 --rc genhtml_function_coverage=1 00:30:03.150 --rc genhtml_legend=1 00:30:03.150 --rc geninfo_all_blocks=1 00:30:03.150 --rc geninfo_unexecuted_blocks=1 00:30:03.150 00:30:03.150 ' 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:03.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.150 --rc genhtml_branch_coverage=1 00:30:03.150 --rc genhtml_function_coverage=1 
00:30:03.150 --rc genhtml_legend=1 00:30:03.150 --rc geninfo_all_blocks=1 00:30:03.150 --rc geninfo_unexecuted_blocks=1 00:30:03.150 00:30:03.150 ' 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:03.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.150 --rc genhtml_branch_coverage=1 00:30:03.150 --rc genhtml_function_coverage=1 00:30:03.150 --rc genhtml_legend=1 00:30:03.150 --rc geninfo_all_blocks=1 00:30:03.150 --rc geninfo_unexecuted_blocks=1 00:30:03.150 00:30:03.150 ' 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:03.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.150 --rc genhtml_branch_coverage=1 00:30:03.150 --rc genhtml_function_coverage=1 00:30:03.150 --rc genhtml_legend=1 00:30:03.150 --rc geninfo_all_blocks=1 00:30:03.150 --rc geninfo_unexecuted_blocks=1 00:30:03.150 00:30:03.150 ' 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:03.150 20:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:03.151 20:31:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.151 20:31:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.151 20:31:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.151 20:31:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:30:03.151 
20:31:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.151 20:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:30:03.151 20:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:03.151 20:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:03.151 20:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:03.151 20:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:03.151 20:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:03.151 20:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:03.151 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:03.151 20:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:03.151 20:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:03.151 20:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:03.151 20:31:14 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:30:03.151 20:31:14 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:30:03.151 00:30:03.151 real 0m0.175s 00:30:03.151 user 0m0.117s 00:30:03.151 sys 0m0.068s 00:30:03.151 20:31:14 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:03.151 20:31:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:03.151 ************************************ 00:30:03.151 END TEST dma 00:30:03.151 ************************************ 00:30:03.151 20:31:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:03.151 20:31:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:03.151 20:31:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:03.151 20:31:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.151 ************************************ 00:30:03.151 START TEST nvmf_identify 00:30:03.151 ************************************ 00:30:03.151 20:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:03.151 * Looking for test storage... 
00:30:03.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:03.151 20:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:03.151 20:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:30:03.151 20:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:03.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.151 --rc genhtml_branch_coverage=1 00:30:03.151 --rc genhtml_function_coverage=1 00:30:03.151 --rc genhtml_legend=1 00:30:03.151 --rc geninfo_all_blocks=1 00:30:03.151 --rc geninfo_unexecuted_blocks=1 00:30:03.151 00:30:03.151 ' 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:30:03.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.151 --rc genhtml_branch_coverage=1 00:30:03.151 --rc genhtml_function_coverage=1 00:30:03.151 --rc genhtml_legend=1 00:30:03.151 --rc geninfo_all_blocks=1 00:30:03.151 --rc geninfo_unexecuted_blocks=1 00:30:03.151 00:30:03.151 ' 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:03.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.151 --rc genhtml_branch_coverage=1 00:30:03.151 --rc genhtml_function_coverage=1 00:30:03.151 --rc genhtml_legend=1 00:30:03.151 --rc geninfo_all_blocks=1 00:30:03.151 --rc geninfo_unexecuted_blocks=1 00:30:03.151 00:30:03.151 ' 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:03.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.151 --rc genhtml_branch_coverage=1 00:30:03.151 --rc genhtml_function_coverage=1 00:30:03.151 --rc genhtml_legend=1 00:30:03.151 --rc geninfo_all_blocks=1 00:30:03.151 --rc geninfo_unexecuted_blocks=1 00:30:03.151 00:30:03.151 ' 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:03.151 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:03.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:30:03.152 20:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:05.058 20:31:17 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:05.058 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:05.058 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:05.317 
20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:05.317 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:05.317 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:05.317 20:31:17 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:05.317 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:05.317 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:05.318 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:05.318 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:30:05.318 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:05.318 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:05.318 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:05.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:05.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:30:05.318 00:30:05.318 --- 10.0.0.2 ping statistics --- 00:30:05.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.318 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:30:05.318 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:05.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:05.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:30:05.318 00:30:05.318 --- 10.0.0.1 ping statistics --- 00:30:05.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.318 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:30:05.318 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:05.318 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:30:05.318 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:05.318 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.318 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:05.318 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:05.318 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.318 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:05.318 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:05.318 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:05.318 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:05.318 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:05.318 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=340363 00:30:05.318 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:05.318 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:05.318 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 340363 00:30:05.318 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 340363 ']' 00:30:05.318 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.318 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:05.318 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
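The `ipts` call earlier in the setup expands to a plain `iptables` invocation with an identifying `SPDK_NVMF:` comment appended, so the harness can find and delete its own firewall rules on teardown. A minimal dry-run sketch of that expansion (the function body is an assumption; it echoes the command instead of calling `iptables`, which needs root):

```shell
# Sketch of nvmf/common.sh's `ipts` helper as seen in the log above:
# re-issue the iptables arguments with a comment tagging the rule as
# SPDK-owned. Echoing here instead of running iptables (root required).
ipts() { printf "iptables %s -m comment --comment 'SPDK_NVMF:%s'\n" "$*" "$*"; }

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The emitted command matches the expanded `iptables` line recorded in the log, which is what allows a later `iptables -S | grep SPDK_NVMF` style cleanup to target only test-created rules.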
00:30:05.318 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:05.318 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:05.318 [2024-11-18 20:31:17.272749] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:30:05.318 [2024-11-18 20:31:17.272828] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.577 [2024-11-18 20:31:17.344501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:05.577 [2024-11-18 20:31:17.391148] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:05.577 [2024-11-18 20:31:17.391211] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.577 [2024-11-18 20:31:17.391234] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:05.577 [2024-11-18 20:31:17.391244] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:05.577 [2024-11-18 20:31:17.391254] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:05.577 [2024-11-18 20:31:17.392781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.577 [2024-11-18 20:31:17.392838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:05.577 [2024-11-18 20:31:17.392905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:05.577 [2024-11-18 20:31:17.392908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.577 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:05.577 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:30:05.577 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:05.577 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.577 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:05.577 [2024-11-18 20:31:17.509361] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:05.577 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.577 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:05.577 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:05.577 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:05.577 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:05.577 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.577 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:05.577 Malloc0 00:30:05.577 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.577 20:31:17 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:05.577 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.577 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:05.838 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.838 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:05.838 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.838 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:05.838 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.838 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:05.838 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.838 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:05.838 [2024-11-18 20:31:17.603776] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:05.838 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.838 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:05.838 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.838 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:05.838 20:31:17 
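The `rpc_cmd` calls above (transport, malloc bdev, subsystem, namespace, listeners) correspond to SPDK's `scripts/rpc.py` methods. A dry-run sketch of that sequence, for illustration only; the `rpc` shim below is an assumption that prints each call instead of issuing it, and against a live `nvmf_tgt` it would be replaced by `scripts/rpc.py`:

```shell
# Dry-run sketch of the RPC sequence the test issues (from host/identify.sh).
# Replace the shim with SPDK's scripts/rpc.py to run against a live target.
rpc() { printf 'rpc.py %s\n' "$*"; }

rpc nvmf_create_transport -t tcp -o -u 8192            # TCP transport, 8192-byte in-capsule data
rpc bdev_malloc_create 64 512 -b Malloc0               # 64 MiB RAM bdev, 512-byte blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

The final `nvmf_get_subsystems` dump below confirms the result: a discovery subsystem plus `cnode1` with `Malloc0` as namespace 1, both listening on 10.0.0.2:4420.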
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.838 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:05.838 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.838 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:05.838 [ 00:30:05.838 { 00:30:05.838 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:05.838 "subtype": "Discovery", 00:30:05.838 "listen_addresses": [ 00:30:05.838 { 00:30:05.838 "trtype": "TCP", 00:30:05.838 "adrfam": "IPv4", 00:30:05.838 "traddr": "10.0.0.2", 00:30:05.838 "trsvcid": "4420" 00:30:05.838 } 00:30:05.838 ], 00:30:05.838 "allow_any_host": true, 00:30:05.838 "hosts": [] 00:30:05.838 }, 00:30:05.838 { 00:30:05.838 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:05.838 "subtype": "NVMe", 00:30:05.838 "listen_addresses": [ 00:30:05.838 { 00:30:05.838 "trtype": "TCP", 00:30:05.838 "adrfam": "IPv4", 00:30:05.838 "traddr": "10.0.0.2", 00:30:05.838 "trsvcid": "4420" 00:30:05.838 } 00:30:05.838 ], 00:30:05.838 "allow_any_host": true, 00:30:05.838 "hosts": [], 00:30:05.838 "serial_number": "SPDK00000000000001", 00:30:05.838 "model_number": "SPDK bdev Controller", 00:30:05.838 "max_namespaces": 32, 00:30:05.838 "min_cntlid": 1, 00:30:05.838 "max_cntlid": 65519, 00:30:05.838 "namespaces": [ 00:30:05.838 { 00:30:05.838 "nsid": 1, 00:30:05.838 "bdev_name": "Malloc0", 00:30:05.838 "name": "Malloc0", 00:30:05.838 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:05.838 "eui64": "ABCDEF0123456789", 00:30:05.838 "uuid": "007169e8-a270-48ef-b9a6-e7a6da0098a3" 00:30:05.838 } 00:30:05.838 ] 00:30:05.838 } 00:30:05.838 ] 00:30:05.838 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.838 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:05.838 [2024-11-18 20:31:17.645750] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:30:05.838 [2024-11-18 20:31:17.645795] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid340386 ] 00:30:05.838 [2024-11-18 20:31:17.696262] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:30:05.838 [2024-11-18 20:31:17.696333] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:05.838 [2024-11-18 20:31:17.696344] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:05.838 [2024-11-18 20:31:17.696362] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:05.838 [2024-11-18 20:31:17.696379] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:05.839 [2024-11-18 20:31:17.700105] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:30:05.839 [2024-11-18 20:31:17.700176] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x13a8650 0 00:30:05.839 [2024-11-18 20:31:17.700308] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:05.839 [2024-11-18 20:31:17.700327] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:05.839 [2024-11-18 20:31:17.700336] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:05.839 [2024-11-18 20:31:17.700343] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:05.839 [2024-11-18 20:31:17.700390] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.839 [2024-11-18 20:31:17.700405] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.839 [2024-11-18 20:31:17.700413] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a8650) 00:30:05.839 [2024-11-18 20:31:17.700434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:05.839 [2024-11-18 20:31:17.700459] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1402f40, cid 0, qid 0 00:30:05.839 [2024-11-18 20:31:17.707671] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.839 [2024-11-18 20:31:17.707689] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.839 [2024-11-18 20:31:17.707697] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.839 [2024-11-18 20:31:17.707705] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1402f40) on tqpair=0x13a8650 00:30:05.839 [2024-11-18 20:31:17.707727] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:05.839 [2024-11-18 20:31:17.707740] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:30:05.839 [2024-11-18 20:31:17.707750] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:30:05.839 [2024-11-18 20:31:17.707776] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.839 [2024-11-18 20:31:17.707785] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.839 [2024-11-18 20:31:17.707792] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a8650) 
00:30:05.839 [2024-11-18 20:31:17.707804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.839 [2024-11-18 20:31:17.707829] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1402f40, cid 0, qid 0 00:30:05.839 [2024-11-18 20:31:17.707923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.839 [2024-11-18 20:31:17.707935] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.839 [2024-11-18 20:31:17.707942] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.839 [2024-11-18 20:31:17.707949] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1402f40) on tqpair=0x13a8650 00:30:05.839 [2024-11-18 20:31:17.707959] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:30:05.839 [2024-11-18 20:31:17.707972] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:30:05.839 [2024-11-18 20:31:17.707985] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.839 [2024-11-18 20:31:17.707993] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.839 [2024-11-18 20:31:17.707999] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a8650) 00:30:05.839 [2024-11-18 20:31:17.708009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.839 [2024-11-18 20:31:17.708030] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1402f40, cid 0, qid 0 00:30:05.839 [2024-11-18 20:31:17.708117] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.839 [2024-11-18 20:31:17.708131] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:30:05.839 [2024-11-18 20:31:17.708138] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.839 [2024-11-18 20:31:17.708149] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1402f40) on tqpair=0x13a8650 00:30:05.839 [2024-11-18 20:31:17.708161] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:30:05.839 [2024-11-18 20:31:17.708176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:05.839 [2024-11-18 20:31:17.708188] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.839 [2024-11-18 20:31:17.708196] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.839 [2024-11-18 20:31:17.708202] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a8650) 00:30:05.839 [2024-11-18 20:31:17.708213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.839 [2024-11-18 20:31:17.708234] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1402f40, cid 0, qid 0 00:30:05.839 [2024-11-18 20:31:17.708311] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.839 [2024-11-18 20:31:17.708324] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.839 [2024-11-18 20:31:17.708331] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.839 [2024-11-18 20:31:17.708338] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1402f40) on tqpair=0x13a8650 00:30:05.839 [2024-11-18 20:31:17.708348] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:05.839 [2024-11-18 20:31:17.708365] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.839 [2024-11-18 20:31:17.708374] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.839 [2024-11-18 20:31:17.708381] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a8650) 00:30:05.839 [2024-11-18 20:31:17.708391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.839 [2024-11-18 20:31:17.708411] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1402f40, cid 0, qid 0 00:30:05.839 [2024-11-18 20:31:17.708493] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.839 [2024-11-18 20:31:17.708507] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.839 [2024-11-18 20:31:17.708514] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.839 [2024-11-18 20:31:17.708520] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1402f40) on tqpair=0x13a8650 00:30:05.839 [2024-11-18 20:31:17.708529] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:05.839 [2024-11-18 20:31:17.708538] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:05.839 [2024-11-18 20:31:17.708551] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:05.839 [2024-11-18 20:31:17.708663] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:30:05.839 [2024-11-18 20:31:17.708674] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:30:05.839 [2024-11-18 20:31:17.708691] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.839 [2024-11-18 20:31:17.708699] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.839 [2024-11-18 20:31:17.708705] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a8650) 00:30:05.839 [2024-11-18 20:31:17.708715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.839 [2024-11-18 20:31:17.708736] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1402f40, cid 0, qid 0 00:30:05.839 [2024-11-18 20:31:17.708829] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.839 [2024-11-18 20:31:17.708842] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.839 [2024-11-18 20:31:17.708849] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.839 [2024-11-18 20:31:17.708855] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1402f40) on tqpair=0x13a8650 00:30:05.839 [2024-11-18 20:31:17.708865] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:05.839 [2024-11-18 20:31:17.708882] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.839 [2024-11-18 20:31:17.708891] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.839 [2024-11-18 20:31:17.708897] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a8650) 00:30:05.839 [2024-11-18 20:31:17.708907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.839 [2024-11-18 20:31:17.708928] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1402f40, cid 0, qid 0 00:30:05.839 [2024-11-18 
20:31:17.709003] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.839 [2024-11-18 20:31:17.709017] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.839 [2024-11-18 20:31:17.709024] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.839 [2024-11-18 20:31:17.709030] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1402f40) on tqpair=0x13a8650 00:30:05.839 [2024-11-18 20:31:17.709039] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:05.839 [2024-11-18 20:31:17.709048] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:05.839 [2024-11-18 20:31:17.709061] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:30:05.839 [2024-11-18 20:31:17.709083] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:05.839 [2024-11-18 20:31:17.709103] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.839 [2024-11-18 20:31:17.709110] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a8650) 00:30:05.839 [2024-11-18 20:31:17.709121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.839 [2024-11-18 20:31:17.709143] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1402f40, cid 0, qid 0 00:30:05.839 [2024-11-18 20:31:17.709276] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:05.839 [2024-11-18 20:31:17.709291] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:30:05.839 [2024-11-18 20:31:17.709298] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:05.839 [2024-11-18 20:31:17.709305] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13a8650): datao=0, datal=4096, cccid=0 00:30:05.839 [2024-11-18 20:31:17.709313] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1402f40) on tqpair(0x13a8650): expected_datao=0, payload_size=4096 00:30:05.839 [2024-11-18 20:31:17.709322] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.839 [2024-11-18 20:31:17.709341] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.709351] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.751647] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.840 [2024-11-18 20:31:17.751666] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.840 [2024-11-18 20:31:17.751674] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.751680] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1402f40) on tqpair=0x13a8650 00:30:05.840 [2024-11-18 20:31:17.751699] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:30:05.840 [2024-11-18 20:31:17.751709] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:30:05.840 [2024-11-18 20:31:17.751716] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:30:05.840 [2024-11-18 20:31:17.751732] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:30:05.840 [2024-11-18 20:31:17.751743] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:30:05.840 [2024-11-18 20:31:17.751750] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:30:05.840 [2024-11-18 20:31:17.751784] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:05.840 [2024-11-18 20:31:17.751799] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.751807] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.751813] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a8650) 00:30:05.840 [2024-11-18 20:31:17.751825] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:05.840 [2024-11-18 20:31:17.751849] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1402f40, cid 0, qid 0 00:30:05.840 [2024-11-18 20:31:17.751938] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.840 [2024-11-18 20:31:17.751952] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.840 [2024-11-18 20:31:17.751959] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.751966] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1402f40) on tqpair=0x13a8650 00:30:05.840 [2024-11-18 20:31:17.751980] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.751988] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.751994] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a8650) 00:30:05.840 [2024-11-18 20:31:17.752004] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:05.840 [2024-11-18 20:31:17.752015] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.752022] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.752028] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x13a8650) 00:30:05.840 [2024-11-18 20:31:17.752036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:05.840 [2024-11-18 20:31:17.752046] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.752053] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.752059] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x13a8650) 00:30:05.840 [2024-11-18 20:31:17.752067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:05.840 [2024-11-18 20:31:17.752077] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.752084] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.752090] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a8650) 00:30:05.840 [2024-11-18 20:31:17.752098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:05.840 [2024-11-18 20:31:17.752112] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:05.840 [2024-11-18 20:31:17.752129] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:05.840 [2024-11-18 20:31:17.752141] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.752148] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13a8650) 00:30:05.840 [2024-11-18 20:31:17.752158] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.840 [2024-11-18 20:31:17.752181] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1402f40, cid 0, qid 0 00:30:05.840 [2024-11-18 20:31:17.752193] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14030c0, cid 1, qid 0 00:30:05.840 [2024-11-18 20:31:17.752201] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403240, cid 2, qid 0 00:30:05.840 [2024-11-18 20:31:17.752209] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14033c0, cid 3, qid 0 00:30:05.840 [2024-11-18 20:31:17.752217] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403540, cid 4, qid 0 00:30:05.840 [2024-11-18 20:31:17.752340] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.840 [2024-11-18 20:31:17.752352] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.840 [2024-11-18 20:31:17.752359] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.752366] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1403540) on tqpair=0x13a8650 00:30:05.840 [2024-11-18 20:31:17.752393] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:30:05.840 [2024-11-18 20:31:17.752403] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:30:05.840 [2024-11-18 20:31:17.752421] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.752431] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13a8650) 00:30:05.840 [2024-11-18 20:31:17.752441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.840 [2024-11-18 20:31:17.752463] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403540, cid 4, qid 0 00:30:05.840 [2024-11-18 20:31:17.752554] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:05.840 [2024-11-18 20:31:17.752566] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:05.840 [2024-11-18 20:31:17.752573] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.752579] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13a8650): datao=0, datal=4096, cccid=4 00:30:05.840 [2024-11-18 20:31:17.752587] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1403540) on tqpair(0x13a8650): expected_datao=0, payload_size=4096 00:30:05.840 [2024-11-18 20:31:17.752594] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.752610] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.752619] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.792708] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.840 [2024-11-18 20:31:17.792727] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.840 [2024-11-18 20:31:17.792735] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.792742] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1403540) on tqpair=0x13a8650 00:30:05.840 [2024-11-18 20:31:17.792764] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:30:05.840 [2024-11-18 20:31:17.792804] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.792819] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13a8650) 00:30:05.840 [2024-11-18 20:31:17.792831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.840 [2024-11-18 20:31:17.792844] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.792851] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.792857] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13a8650) 00:30:05.840 [2024-11-18 20:31:17.792866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:05.840 [2024-11-18 20:31:17.792904] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403540, cid 4, qid 0 00:30:05.840 [2024-11-18 20:31:17.792917] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14036c0, cid 5, qid 0 00:30:05.840 [2024-11-18 20:31:17.793065] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:05.840 [2024-11-18 20:31:17.793081] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:05.840 [2024-11-18 20:31:17.793089] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.793095] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13a8650): datao=0, datal=1024, cccid=4 00:30:05.840 [2024-11-18 20:31:17.793104] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1403540) on tqpair(0x13a8650): expected_datao=0, payload_size=1024 00:30:05.840 [2024-11-18 20:31:17.793111] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.793121] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.793129] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.793138] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.840 [2024-11-18 20:31:17.793147] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.840 [2024-11-18 20:31:17.793153] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.793160] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14036c0) on tqpair=0x13a8650 00:30:05.840 [2024-11-18 20:31:17.837664] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.840 [2024-11-18 20:31:17.837692] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.840 [2024-11-18 20:31:17.837699] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.837706] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1403540) on tqpair=0x13a8650 00:30:05.840 [2024-11-18 20:31:17.837724] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.840 [2024-11-18 20:31:17.837733] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13a8650) 00:30:05.840 [2024-11-18 20:31:17.837744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.840 [2024-11-18 20:31:17.837787] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403540, cid 4, qid 0 00:30:05.840 [2024-11-18 20:31:17.837894] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:05.841 [2024-11-18 20:31:17.837908] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:05.841 [2024-11-18 20:31:17.837915] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:05.841 [2024-11-18 20:31:17.837922] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13a8650): datao=0, datal=3072, cccid=4 00:30:05.841 [2024-11-18 20:31:17.837929] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1403540) on tqpair(0x13a8650): expected_datao=0, payload_size=3072 00:30:05.841 [2024-11-18 20:31:17.837937] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.841 [2024-11-18 20:31:17.837947] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:05.841 [2024-11-18 20:31:17.837954] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:05.841 [2024-11-18 20:31:17.837970] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.841 [2024-11-18 20:31:17.837981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.841 [2024-11-18 20:31:17.837988] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.841 [2024-11-18 20:31:17.837995] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1403540) on tqpair=0x13a8650 00:30:05.841 [2024-11-18 20:31:17.838010] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.841 [2024-11-18 20:31:17.838019] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13a8650) 00:30:05.841 [2024-11-18 20:31:17.838029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.841 [2024-11-18 20:31:17.838057] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403540, cid 4, qid 0 00:30:05.841 [2024-11-18 
20:31:17.838170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:30:05.841 [2024-11-18 20:31:17.838182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:30:05.841 [2024-11-18 20:31:17.838189] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:30:05.841 [2024-11-18 20:31:17.838195] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13a8650): datao=0, datal=8, cccid=4
00:30:05.841 [2024-11-18 20:31:17.838202] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1403540) on tqpair(0x13a8650): expected_datao=0, payload_size=8
00:30:05.841 [2024-11-18 20:31:17.838209] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:05.841 [2024-11-18 20:31:17.838219] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:30:05.841 [2024-11-18 20:31:17.838226] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:30:06.104 [2024-11-18 20:31:17.878709] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:06.104 [2024-11-18 20:31:17.878729] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:06.104 [2024-11-18 20:31:17.878737] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:06.104 [2024-11-18 20:31:17.878745] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1403540) on tqpair=0x13a8650
00:30:06.104 =====================================================
00:30:06.104 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:30:06.104 =====================================================
00:30:06.104 Controller Capabilities/Features
00:30:06.104 ================================
00:30:06.104 Vendor ID: 0000
00:30:06.104 Subsystem Vendor ID: 0000
00:30:06.104 Serial Number: ....................
00:30:06.104 Model Number: ........................................
00:30:06.104 Firmware Version: 25.01
00:30:06.104 Recommended Arb Burst: 0
00:30:06.104 IEEE OUI Identifier: 00 00 00
00:30:06.104 Multi-path I/O
00:30:06.104 May have multiple subsystem ports: No
00:30:06.104 May have multiple controllers: No
00:30:06.104 Associated with SR-IOV VF: No
00:30:06.104 Max Data Transfer Size: 131072
00:30:06.104 Max Number of Namespaces: 0
00:30:06.104 Max Number of I/O Queues: 1024
00:30:06.104 NVMe Specification Version (VS): 1.3
00:30:06.104 NVMe Specification Version (Identify): 1.3
00:30:06.104 Maximum Queue Entries: 128
00:30:06.104 Contiguous Queues Required: Yes
00:30:06.104 Arbitration Mechanisms Supported
00:30:06.104 Weighted Round Robin: Not Supported
00:30:06.104 Vendor Specific: Not Supported
00:30:06.104 Reset Timeout: 15000 ms
00:30:06.104 Doorbell Stride: 4 bytes
00:30:06.104 NVM Subsystem Reset: Not Supported
00:30:06.104 Command Sets Supported
00:30:06.104 NVM Command Set: Supported
00:30:06.104 Boot Partition: Not Supported
00:30:06.104 Memory Page Size Minimum: 4096 bytes
00:30:06.104 Memory Page Size Maximum: 4096 bytes
00:30:06.104 Persistent Memory Region: Not Supported
00:30:06.104 Optional Asynchronous Events Supported
00:30:06.104 Namespace Attribute Notices: Not Supported
00:30:06.104 Firmware Activation Notices: Not Supported
00:30:06.104 ANA Change Notices: Not Supported
00:30:06.104 PLE Aggregate Log Change Notices: Not Supported
00:30:06.104 LBA Status Info Alert Notices: Not Supported
00:30:06.104 EGE Aggregate Log Change Notices: Not Supported
00:30:06.104 Normal NVM Subsystem Shutdown event: Not Supported
00:30:06.104 Zone Descriptor Change Notices: Not Supported
00:30:06.104 Discovery Log Change Notices: Supported
00:30:06.104 Controller Attributes
00:30:06.104 128-bit Host Identifier: Not Supported
00:30:06.104 Non-Operational Permissive Mode: Not Supported
00:30:06.104 NVM Sets: Not Supported
00:30:06.104 Read Recovery Levels: Not Supported
00:30:06.104 Endurance Groups: Not Supported
00:30:06.104 Predictable Latency Mode: Not Supported
00:30:06.104 Traffic Based Keep ALive: Not Supported
00:30:06.104 Namespace Granularity: Not Supported
00:30:06.104 SQ Associations: Not Supported
00:30:06.104 UUID List: Not Supported
00:30:06.104 Multi-Domain Subsystem: Not Supported
00:30:06.104 Fixed Capacity Management: Not Supported
00:30:06.104 Variable Capacity Management: Not Supported
00:30:06.104 Delete Endurance Group: Not Supported
00:30:06.104 Delete NVM Set: Not Supported
00:30:06.104 Extended LBA Formats Supported: Not Supported
00:30:06.104 Flexible Data Placement Supported: Not Supported
00:30:06.104
00:30:06.104 Controller Memory Buffer Support
00:30:06.104 ================================
00:30:06.104 Supported: No
00:30:06.104
00:30:06.104 Persistent Memory Region Support
00:30:06.104 ================================
00:30:06.104 Supported: No
00:30:06.104
00:30:06.104 Admin Command Set Attributes
00:30:06.104 ============================
00:30:06.105 Security Send/Receive: Not Supported
00:30:06.105 Format NVM: Not Supported
00:30:06.105 Firmware Activate/Download: Not Supported
00:30:06.105 Namespace Management: Not Supported
00:30:06.105 Device Self-Test: Not Supported
00:30:06.105 Directives: Not Supported
00:30:06.105 NVMe-MI: Not Supported
00:30:06.105 Virtualization Management: Not Supported
00:30:06.105 Doorbell Buffer Config: Not Supported
00:30:06.105 Get LBA Status Capability: Not Supported
00:30:06.105 Command & Feature Lockdown Capability: Not Supported
00:30:06.105 Abort Command Limit: 1
00:30:06.105 Async Event Request Limit: 4
00:30:06.105 Number of Firmware Slots: N/A
00:30:06.105 Firmware Slot 1 Read-Only: N/A
00:30:06.105 Firmware Activation Without Reset: N/A
00:30:06.105 Multiple Update Detection Support: N/A
00:30:06.105 Firmware Update Granularity: No Information Provided
00:30:06.105 Per-Namespace SMART Log: No
00:30:06.105 Asymmetric Namespace Access Log Page: Not Supported
00:30:06.105 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:30:06.105 Command Effects Log Page: Not Supported
00:30:06.105 Get Log Page Extended Data: Supported
00:30:06.105 Telemetry Log Pages: Not Supported
00:30:06.105 Persistent Event Log Pages: Not Supported
00:30:06.105 Supported Log Pages Log Page: May Support
00:30:06.105 Commands Supported & Effects Log Page: Not Supported
00:30:06.105 Feature Identifiers & Effects Log Page:May Support
00:30:06.105 NVMe-MI Commands & Effects Log Page: May Support
00:30:06.105 Data Area 4 for Telemetry Log: Not Supported
00:30:06.105 Error Log Page Entries Supported: 128
00:30:06.105 Keep Alive: Not Supported
00:30:06.105
00:30:06.105 NVM Command Set Attributes
00:30:06.105 ==========================
00:30:06.105 Submission Queue Entry Size
00:30:06.105 Max: 1
00:30:06.105 Min: 1
00:30:06.105 Completion Queue Entry Size
00:30:06.105 Max: 1
00:30:06.105 Min: 1
00:30:06.105 Number of Namespaces: 0
00:30:06.105 Compare Command: Not Supported
00:30:06.105 Write Uncorrectable Command: Not Supported
00:30:06.105 Dataset Management Command: Not Supported
00:30:06.105 Write Zeroes Command: Not Supported
00:30:06.105 Set Features Save Field: Not Supported
00:30:06.105 Reservations: Not Supported
00:30:06.105 Timestamp: Not Supported
00:30:06.105 Copy: Not Supported
00:30:06.105 Volatile Write Cache: Not Present
00:30:06.105 Atomic Write Unit (Normal): 1
00:30:06.105 Atomic Write Unit (PFail): 1
00:30:06.105 Atomic Compare & Write Unit: 1
00:30:06.105 Fused Compare & Write: Supported
00:30:06.105 Scatter-Gather List
00:30:06.105 SGL Command Set: Supported
00:30:06.105 SGL Keyed: Supported
00:30:06.105 SGL Bit Bucket Descriptor: Not Supported
00:30:06.105 SGL Metadata Pointer: Not Supported
00:30:06.105 Oversized SGL: Not Supported
00:30:06.105 SGL Metadata Address: Not Supported
00:30:06.105 SGL Offset: Supported
00:30:06.105 Transport SGL Data Block: Not Supported
00:30:06.105 Replay Protected Memory Block: Not Supported
00:30:06.105
00:30:06.105 Firmware Slot Information
00:30:06.105 =========================
00:30:06.105 Active slot: 0
00:30:06.105
00:30:06.105
00:30:06.105 Error Log
00:30:06.105 =========
00:30:06.105
00:30:06.105 Active Namespaces
00:30:06.105 =================
00:30:06.105 Discovery Log Page
00:30:06.105 ==================
00:30:06.105 Generation Counter: 2
00:30:06.105 Number of Records: 2
00:30:06.105 Record Format: 0
00:30:06.105
00:30:06.105 Discovery Log Entry 0
00:30:06.105 ----------------------
00:30:06.105 Transport Type: 3 (TCP)
00:30:06.105 Address Family: 1 (IPv4)
00:30:06.105 Subsystem Type: 3 (Current Discovery Subsystem)
00:30:06.105 Entry Flags:
00:30:06.105 Duplicate Returned Information: 1
00:30:06.105 Explicit Persistent Connection Support for Discovery: 1
00:30:06.105 Transport Requirements:
00:30:06.105 Secure Channel: Not Required
00:30:06.105 Port ID: 0 (0x0000)
00:30:06.105 Controller ID: 65535 (0xffff)
00:30:06.105 Admin Max SQ Size: 128
00:30:06.105 Transport Service Identifier: 4420
00:30:06.105 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:30:06.105 Transport Address: 10.0.0.2
00:30:06.105 Discovery Log Entry 1
00:30:06.105 ----------------------
00:30:06.105 Transport Type: 3 (TCP)
00:30:06.105 Address Family: 1 (IPv4)
00:30:06.105 Subsystem Type: 2 (NVM Subsystem)
00:30:06.105 Entry Flags:
00:30:06.105 Duplicate Returned Information: 0
00:30:06.105 Explicit Persistent Connection Support for Discovery: 0
00:30:06.105 Transport Requirements:
00:30:06.105 Secure Channel: Not Required
00:30:06.105 Port ID: 0 (0x0000)
00:30:06.105 Controller ID: 65535 (0xffff)
00:30:06.105 Admin Max SQ Size: 128
00:30:06.105 Transport Service Identifier: 4420
00:30:06.105 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:30:06.105 Transport Address: 10.0.0.2 [2024-11-18 20:31:17.878873] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:30:06.105 [2024-11-18
20:31:17.878897] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1402f40) on tqpair=0x13a8650 00:30:06.105 [2024-11-18 20:31:17.878911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.105 [2024-11-18 20:31:17.878921] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14030c0) on tqpair=0x13a8650 00:30:06.105 [2024-11-18 20:31:17.878929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.105 [2024-11-18 20:31:17.878937] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1403240) on tqpair=0x13a8650 00:30:06.105 [2024-11-18 20:31:17.878945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.105 [2024-11-18 20:31:17.878953] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14033c0) on tqpair=0x13a8650 00:30:06.105 [2024-11-18 20:31:17.878960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.105 [2024-11-18 20:31:17.878979] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.105 [2024-11-18 20:31:17.879003] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.105 [2024-11-18 20:31:17.879009] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a8650) 00:30:06.105 [2024-11-18 20:31:17.879021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.105 [2024-11-18 20:31:17.879046] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14033c0, cid 3, qid 0 00:30:06.105 [2024-11-18 20:31:17.879136] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.105 [2024-11-18 
20:31:17.879150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.105 [2024-11-18 20:31:17.879158] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.105 [2024-11-18 20:31:17.879165] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14033c0) on tqpair=0x13a8650 00:30:06.105 [2024-11-18 20:31:17.879177] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.105 [2024-11-18 20:31:17.879185] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.105 [2024-11-18 20:31:17.879192] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a8650) 00:30:06.105 [2024-11-18 20:31:17.879203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.105 [2024-11-18 20:31:17.879229] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14033c0, cid 3, qid 0 00:30:06.105 [2024-11-18 20:31:17.879324] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.105 [2024-11-18 20:31:17.879337] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.105 [2024-11-18 20:31:17.879344] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.105 [2024-11-18 20:31:17.879351] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14033c0) on tqpair=0x13a8650 00:30:06.105 [2024-11-18 20:31:17.879360] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:30:06.105 [2024-11-18 20:31:17.879368] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:30:06.105 [2024-11-18 20:31:17.879384] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.105 [2024-11-18 20:31:17.879393] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.105 
[2024-11-18 20:31:17.879399] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a8650) 00:30:06.105 [2024-11-18 20:31:17.879410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.105 [2024-11-18 20:31:17.879430] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14033c0, cid 3, qid 0 00:30:06.105 [2024-11-18 20:31:17.879517] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.105 [2024-11-18 20:31:17.879530] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.105 [2024-11-18 20:31:17.879537] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.105 [2024-11-18 20:31:17.879544] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14033c0) on tqpair=0x13a8650 00:30:06.105 [2024-11-18 20:31:17.879561] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.105 [2024-11-18 20:31:17.879570] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.106 [2024-11-18 20:31:17.879577] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a8650) 00:30:06.106 [2024-11-18 20:31:17.879587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.106 [2024-11-18 20:31:17.879608] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14033c0, cid 3, qid 0 00:30:06.106 [2024-11-18 20:31:17.879695] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.106 [2024-11-18 20:31:17.879709] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.106 [2024-11-18 20:31:17.879716] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.106 [2024-11-18 20:31:17.879723] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14033c0) on 
tqpair=0x13a8650 00:30:06.106 [2024-11-18 20:31:17.879740] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.106 [2024-11-18 20:31:17.879749] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.106 [2024-11-18 20:31:17.879755] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a8650) 00:30:06.106 [2024-11-18 20:31:17.879766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.106 [2024-11-18 20:31:17.879792] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14033c0, cid 3, qid 0 00:30:06.106 [2024-11-18 20:31:17.879877] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.106 [2024-11-18 20:31:17.879891] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.106 [2024-11-18 20:31:17.879898] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.106 [2024-11-18 20:31:17.879904] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14033c0) on tqpair=0x13a8650 00:30:06.106 [2024-11-18 20:31:17.879920] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.106 [2024-11-18 20:31:17.879930] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.106 [2024-11-18 20:31:17.879937] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a8650) 00:30:06.106 [2024-11-18 20:31:17.879947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.106 [2024-11-18 20:31:17.879968] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14033c0, cid 3, qid 0 00:30:06.106 [2024-11-18 20:31:17.880041] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.106 [2024-11-18 20:31:17.880052] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:30:06.106 [2024-11-18 20:31:17.880060] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.106 [2024-11-18 20:31:17.880066] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14033c0) on tqpair=0x13a8650 00:30:06.106 [2024-11-18 20:31:17.880082] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.106 [2024-11-18 20:31:17.880091] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.106 [2024-11-18 20:31:17.880098] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a8650) 00:30:06.106 [2024-11-18 20:31:17.880108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.106 [2024-11-18 20:31:17.880129] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14033c0, cid 3, qid 0 00:30:06.106 [2024-11-18 20:31:17.880202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.106 [2024-11-18 20:31:17.880214] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.106 [2024-11-18 20:31:17.880221] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.106 [2024-11-18 20:31:17.880227] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14033c0) on tqpair=0x13a8650 00:30:06.106 [2024-11-18 20:31:17.880243] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.106 [2024-11-18 20:31:17.880253] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.106 [2024-11-18 20:31:17.880259] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a8650) 00:30:06.106 [2024-11-18 20:31:17.880269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.106 [2024-11-18 20:31:17.880289] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x14033c0, cid 3, qid 0 00:30:06.106 [2024-11-18 20:31:17.880365] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.106 [2024-11-18 20:31:17.880377] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.106 [2024-11-18 20:31:17.880384] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.106 [2024-11-18 20:31:17.880391] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14033c0) on tqpair=0x13a8650 00:30:06.106 [2024-11-18 20:31:17.880407] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.106 [2024-11-18 20:31:17.880416] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.106 [2024-11-18 20:31:17.880423] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a8650) 00:30:06.106 [2024-11-18 20:31:17.880433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.106 [2024-11-18 20:31:17.880457] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14033c0, cid 3, qid 0 00:30:06.106 [2024-11-18 20:31:17.880538] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.106 [2024-11-18 20:31:17.880552] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.106 [2024-11-18 20:31:17.880559] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.106 [2024-11-18 20:31:17.880565] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14033c0) on tqpair=0x13a8650 00:30:06.106 [2024-11-18 20:31:17.880581] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.106 [2024-11-18 20:31:17.880590] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.106 [2024-11-18 20:31:17.880597] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a8650) 00:30:06.106 [2024-11-18 20:31:17.880608] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.106 [2024-11-18 20:31:17.880628] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14033c0, cid 3, qid 0 00:30:06.106 [2024-11-18 20:31:17.884671] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.106 [2024-11-18 20:31:17.884685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.106 [2024-11-18 20:31:17.884692] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.106 [2024-11-18 20:31:17.884699] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14033c0) on tqpair=0x13a8650 00:30:06.106 [2024-11-18 20:31:17.884716] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.106 [2024-11-18 20:31:17.884740] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.106 [2024-11-18 20:31:17.884747] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a8650) 00:30:06.106 [2024-11-18 20:31:17.884758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.106 [2024-11-18 20:31:17.884780] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14033c0, cid 3, qid 0 00:30:06.106 [2024-11-18 20:31:17.884865] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.106 [2024-11-18 20:31:17.884877] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.106 [2024-11-18 20:31:17.884884] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.106 [2024-11-18 20:31:17.884891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14033c0) on tqpair=0x13a8650 00:30:06.106 [2024-11-18 20:31:17.884903] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete 
in 5 milliseconds 00:30:06.106 00:30:06.106 20:31:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:30:06.106 [2024-11-18 20:31:17.919504] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:30:06.106 [2024-11-18 20:31:17.919553] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid340512 ] 00:30:06.106 [2024-11-18 20:31:17.968322] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:30:06.106 [2024-11-18 20:31:17.968377] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:06.106 [2024-11-18 20:31:17.968388] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:06.106 [2024-11-18 20:31:17.968402] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:06.106 [2024-11-18 20:31:17.968419] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:06.106 [2024-11-18 20:31:17.971948] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:30:06.106 [2024-11-18 20:31:17.971984] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x7d2650 0 00:30:06.106 [2024-11-18 20:31:17.979673] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:06.106 [2024-11-18 20:31:17.979699] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:06.106 [2024-11-18 20:31:17.979707] nvme_tcp.c:1501:nvme_tcp_icresp_handle: 
*DEBUG*: host_hdgst_enable: 0 00:30:06.106 [2024-11-18 20:31:17.979713] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:06.106 [2024-11-18 20:31:17.979758] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.106 [2024-11-18 20:31:17.979770] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.106 [2024-11-18 20:31:17.979777] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7d2650) 00:30:06.106 [2024-11-18 20:31:17.979792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:06.106 [2024-11-18 20:31:17.979818] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82cf40, cid 0, qid 0 00:30:06.106 [2024-11-18 20:31:17.986651] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.106 [2024-11-18 20:31:17.986670] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.106 [2024-11-18 20:31:17.986678] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.106 [2024-11-18 20:31:17.986685] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82cf40) on tqpair=0x7d2650 00:30:06.106 [2024-11-18 20:31:17.986704] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:06.106 [2024-11-18 20:31:17.986716] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:30:06.107 [2024-11-18 20:31:17.986726] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:30:06.107 [2024-11-18 20:31:17.986744] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.107 [2024-11-18 20:31:17.986753] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.107 [2024-11-18 20:31:17.986760] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0x7d2650) 00:30:06.107 [2024-11-18 20:31:17.986772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.107 [2024-11-18 20:31:17.986796] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82cf40, cid 0, qid 0 00:30:06.107 [2024-11-18 20:31:17.986913] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.107 [2024-11-18 20:31:17.986926] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.107 [2024-11-18 20:31:17.986933] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.107 [2024-11-18 20:31:17.986939] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82cf40) on tqpair=0x7d2650 00:30:06.107 [2024-11-18 20:31:17.986948] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:30:06.107 [2024-11-18 20:31:17.986961] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:30:06.107 [2024-11-18 20:31:17.986973] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.107 [2024-11-18 20:31:17.986981] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.107 [2024-11-18 20:31:17.986987] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7d2650) 00:30:06.107 [2024-11-18 20:31:17.986998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.107 [2024-11-18 20:31:17.987019] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82cf40, cid 0, qid 0 00:30:06.107 [2024-11-18 20:31:17.987100] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.107 [2024-11-18 20:31:17.987115] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:30:06.107 [2024-11-18 20:31:17.987122] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.107 [2024-11-18 20:31:17.987129] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82cf40) on tqpair=0x7d2650 00:30:06.107 [2024-11-18 20:31:17.987137] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:30:06.107 [2024-11-18 20:31:17.987152] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:06.107 [2024-11-18 20:31:17.987164] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.107 [2024-11-18 20:31:17.987172] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.107 [2024-11-18 20:31:17.987179] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7d2650) 00:30:06.107 [2024-11-18 20:31:17.987189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.107 [2024-11-18 20:31:17.987210] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82cf40, cid 0, qid 0 00:30:06.107 [2024-11-18 20:31:17.987290] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.107 [2024-11-18 20:31:17.987302] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.107 [2024-11-18 20:31:17.987309] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.107 [2024-11-18 20:31:17.987316] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82cf40) on tqpair=0x7d2650 00:30:06.107 [2024-11-18 20:31:17.987324] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:06.107 [2024-11-18 20:31:17.987340] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:30:06.107 [2024-11-18 20:31:17.987348] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.107 [2024-11-18 20:31:17.987355] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7d2650) 00:30:06.107 [2024-11-18 20:31:17.987365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.107 [2024-11-18 20:31:17.987386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82cf40, cid 0, qid 0 00:30:06.107 [2024-11-18 20:31:17.987467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.107 [2024-11-18 20:31:17.987480] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.107 [2024-11-18 20:31:17.987488] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.107 [2024-11-18 20:31:17.987494] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82cf40) on tqpair=0x7d2650 00:30:06.107 [2024-11-18 20:31:17.987502] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:06.107 [2024-11-18 20:31:17.987510] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:06.107 [2024-11-18 20:31:17.987523] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:06.107 [2024-11-18 20:31:17.987634] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:30:06.107 [2024-11-18 20:31:17.987654] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:06.107 [2024-11-18 20:31:17.987666] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.107 [2024-11-18 20:31:17.987674] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.107 [2024-11-18 20:31:17.987681] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7d2650) 00:30:06.107 [2024-11-18 20:31:17.987691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.107 [2024-11-18 20:31:17.987725] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82cf40, cid 0, qid 0 00:30:06.107 [2024-11-18 20:31:17.987831] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.107 [2024-11-18 20:31:17.987844] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.107 [2024-11-18 20:31:17.987851] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.107 [2024-11-18 20:31:17.987858] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82cf40) on tqpair=0x7d2650 00:30:06.107 [2024-11-18 20:31:17.987866] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:06.107 [2024-11-18 20:31:17.987882] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.107 [2024-11-18 20:31:17.987891] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.107 [2024-11-18 20:31:17.987898] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7d2650) 00:30:06.107 [2024-11-18 20:31:17.987908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.107 [2024-11-18 20:31:17.987929] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82cf40, cid 0, qid 0 00:30:06.107 [2024-11-18 20:31:17.988007] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:30:06.107 [2024-11-18 20:31:17.988021] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.107 [2024-11-18 20:31:17.988028] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.107 [2024-11-18 20:31:17.988035] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82cf40) on tqpair=0x7d2650 00:30:06.107 [2024-11-18 20:31:17.988042] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:06.107 [2024-11-18 20:31:17.988051] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:06.107 [2024-11-18 20:31:17.988064] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:30:06.107 [2024-11-18 20:31:17.988079] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:06.107 [2024-11-18 20:31:17.988092] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.107 [2024-11-18 20:31:17.988100] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7d2650) 00:30:06.107 [2024-11-18 20:31:17.988111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.107 [2024-11-18 20:31:17.988133] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82cf40, cid 0, qid 0 00:30:06.107 [2024-11-18 20:31:17.988247] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.107 [2024-11-18 20:31:17.988261] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.107 [2024-11-18 20:31:17.988268] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 
00:30:06.107 [2024-11-18 20:31:17.988274] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7d2650): datao=0, datal=4096, cccid=0 00:30:06.107 [2024-11-18 20:31:17.988282] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x82cf40) on tqpair(0x7d2650): expected_datao=0, payload_size=4096 00:30:06.107 [2024-11-18 20:31:17.988289] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.107 [2024-11-18 20:31:17.988307] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.107 [2024-11-18 20:31:17.988316] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.107 [2024-11-18 20:31:17.988346] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.107 [2024-11-18 20:31:17.988359] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.107 [2024-11-18 20:31:17.988366] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.107 [2024-11-18 20:31:17.988377] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82cf40) on tqpair=0x7d2650 00:30:06.107 [2024-11-18 20:31:17.988388] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:30:06.107 [2024-11-18 20:31:17.988397] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:30:06.107 [2024-11-18 20:31:17.988405] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:30:06.107 [2024-11-18 20:31:17.988416] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:30:06.107 [2024-11-18 20:31:17.988425] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:30:06.107 [2024-11-18 20:31:17.988433] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to configure AER (timeout 30000 ms) 00:30:06.108 [2024-11-18 20:31:17.988453] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:06.108 [2024-11-18 20:31:17.988466] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.108 [2024-11-18 20:31:17.988474] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.108 [2024-11-18 20:31:17.988481] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7d2650) 00:30:06.108 [2024-11-18 20:31:17.988491] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:06.108 [2024-11-18 20:31:17.988513] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82cf40, cid 0, qid 0 00:30:06.108 [2024-11-18 20:31:17.988587] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.108 [2024-11-18 20:31:17.988599] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.108 [2024-11-18 20:31:17.988606] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.108 [2024-11-18 20:31:17.988613] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82cf40) on tqpair=0x7d2650 00:30:06.108 [2024-11-18 20:31:17.988623] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.108 [2024-11-18 20:31:17.988631] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.108 [2024-11-18 20:31:17.988652] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7d2650) 00:30:06.108 [2024-11-18 20:31:17.988664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.108 [2024-11-18 20:31:17.988674] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.108 
[2024-11-18 20:31:17.988682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.108 [2024-11-18 20:31:17.988688] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x7d2650) 00:30:06.108 [2024-11-18 20:31:17.988697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.108 [2024-11-18 20:31:17.988707] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.108 [2024-11-18 20:31:17.988714] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.108 [2024-11-18 20:31:17.988720] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x7d2650) 00:30:06.108 [2024-11-18 20:31:17.988728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.108 [2024-11-18 20:31:17.988738] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.108 [2024-11-18 20:31:17.988745] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.108 [2024-11-18 20:31:17.988752] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d2650) 00:30:06.108 [2024-11-18 20:31:17.988760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.108 [2024-11-18 20:31:17.988773] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:06.108 [2024-11-18 20:31:17.988789] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:06.108 [2024-11-18 20:31:17.988801] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.108 [2024-11-18 20:31:17.988823] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7d2650) 00:30:06.108 [2024-11-18 20:31:17.988834] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.108 [2024-11-18 20:31:17.988856] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82cf40, cid 0, qid 0 00:30:06.108 [2024-11-18 20:31:17.988867] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82d0c0, cid 1, qid 0 00:30:06.108 [2024-11-18 20:31:17.988890] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82d240, cid 2, qid 0 00:30:06.108 [2024-11-18 20:31:17.988898] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82d3c0, cid 3, qid 0 00:30:06.108 [2024-11-18 20:31:17.988906] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82d540, cid 4, qid 0 00:30:06.108 [2024-11-18 20:31:17.989052] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.108 [2024-11-18 20:31:17.989066] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.108 [2024-11-18 20:31:17.989073] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.108 [2024-11-18 20:31:17.989079] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82d540) on tqpair=0x7d2650 00:30:06.108 [2024-11-18 20:31:17.989092] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:30:06.108 [2024-11-18 20:31:17.989102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:30:06.108 [2024-11-18 20:31:17.989117] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:30:06.108 [2024-11-18 20:31:17.989130] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:30:06.108 [2024-11-18 20:31:17.989141] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.108 [2024-11-18 20:31:17.989148] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.108 [2024-11-18 20:31:17.989155] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7d2650) 00:30:06.108 [2024-11-18 20:31:17.989165] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:06.108 [2024-11-18 20:31:17.989200] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82d540, cid 4, qid 0 00:30:06.108 [2024-11-18 20:31:17.989360] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.108 [2024-11-18 20:31:17.989372] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.108 [2024-11-18 20:31:17.989379] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.108 [2024-11-18 20:31:17.989386] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82d540) on tqpair=0x7d2650 00:30:06.108 [2024-11-18 20:31:17.989455] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:30:06.108 [2024-11-18 20:31:17.989476] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:30:06.108 [2024-11-18 20:31:17.989492] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.108 [2024-11-18 20:31:17.989499] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7d2650) 00:30:06.108 [2024-11-18 20:31:17.989510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) 
qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.108 [2024-11-18 20:31:17.989535] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82d540, cid 4, qid 0 00:30:06.108 [2024-11-18 20:31:17.989644] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.108 [2024-11-18 20:31:17.989657] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.108 [2024-11-18 20:31:17.989665] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.108 [2024-11-18 20:31:17.989671] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7d2650): datao=0, datal=4096, cccid=4 00:30:06.108 [2024-11-18 20:31:17.989679] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x82d540) on tqpair(0x7d2650): expected_datao=0, payload_size=4096 00:30:06.108 [2024-11-18 20:31:17.989686] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.108 [2024-11-18 20:31:17.989703] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.108 [2024-11-18 20:31:17.989712] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.108 [2024-11-18 20:31:18.032664] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.108 [2024-11-18 20:31:18.032682] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.108 [2024-11-18 20:31:18.032690] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.108 [2024-11-18 20:31:18.032697] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82d540) on tqpair=0x7d2650 00:30:06.108 [2024-11-18 20:31:18.032722] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:30:06.108 [2024-11-18 20:31:18.032740] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:30:06.108 [2024-11-18 
20:31:18.032779] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:30:06.108 [2024-11-18 20:31:18.032794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.108 [2024-11-18 20:31:18.032802] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7d2650) 00:30:06.108 [2024-11-18 20:31:18.032813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.108 [2024-11-18 20:31:18.032838] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82d540, cid 4, qid 0 00:30:06.108 [2024-11-18 20:31:18.032994] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.108 [2024-11-18 20:31:18.033007] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.108 [2024-11-18 20:31:18.033014] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.108 [2024-11-18 20:31:18.033020] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7d2650): datao=0, datal=4096, cccid=4 00:30:06.108 [2024-11-18 20:31:18.033028] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x82d540) on tqpair(0x7d2650): expected_datao=0, payload_size=4096 00:30:06.108 [2024-11-18 20:31:18.033035] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.108 [2024-11-18 20:31:18.033052] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.108 [2024-11-18 20:31:18.033061] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.108 [2024-11-18 20:31:18.073758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.108 [2024-11-18 20:31:18.073777] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.108 [2024-11-18 20:31:18.073785] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.108 [2024-11-18 20:31:18.073793] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82d540) on tqpair=0x7d2650 00:30:06.108 [2024-11-18 20:31:18.073819] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:30:06.108 [2024-11-18 20:31:18.073840] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:06.109 [2024-11-18 20:31:18.073859] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.109 [2024-11-18 20:31:18.073868] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7d2650) 00:30:06.109 [2024-11-18 20:31:18.073879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.109 [2024-11-18 20:31:18.073903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82d540, cid 4, qid 0 00:30:06.109 [2024-11-18 20:31:18.073989] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.109 [2024-11-18 20:31:18.074004] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.109 [2024-11-18 20:31:18.074011] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.109 [2024-11-18 20:31:18.074017] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7d2650): datao=0, datal=4096, cccid=4 00:30:06.109 [2024-11-18 20:31:18.074025] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x82d540) on tqpair(0x7d2650): expected_datao=0, payload_size=4096 00:30:06.109 [2024-11-18 20:31:18.074033] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.109 [2024-11-18 20:31:18.074050] 
nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.109 [2024-11-18 20:31:18.074059] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.109 [2024-11-18 20:31:18.074071] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.109 [2024-11-18 20:31:18.074081] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.109 [2024-11-18 20:31:18.074087] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.109 [2024-11-18 20:31:18.074094] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82d540) on tqpair=0x7d2650 00:30:06.109 [2024-11-18 20:31:18.074108] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:06.109 [2024-11-18 20:31:18.074124] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:30:06.109 [2024-11-18 20:31:18.074141] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:30:06.109 [2024-11-18 20:31:18.074154] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:30:06.109 [2024-11-18 20:31:18.074163] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:06.109 [2024-11-18 20:31:18.074172] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:30:06.109 [2024-11-18 20:31:18.074182] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:30:06.109 [2024-11-18 20:31:18.074190] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:30:06.109 [2024-11-18 20:31:18.074199] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:30:06.109 [2024-11-18 20:31:18.074220] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.109 [2024-11-18 20:31:18.074229] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7d2650) 00:30:06.109 [2024-11-18 20:31:18.074239] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.109 [2024-11-18 20:31:18.074251] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.109 [2024-11-18 20:31:18.074258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.109 [2024-11-18 20:31:18.074265] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7d2650) 00:30:06.109 [2024-11-18 20:31:18.074274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.109 [2024-11-18 20:31:18.074318] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82d540, cid 4, qid 0 00:30:06.109 [2024-11-18 20:31:18.074331] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82d6c0, cid 5, qid 0 00:30:06.109 [2024-11-18 20:31:18.074496] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.109 [2024-11-18 20:31:18.074509] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.109 [2024-11-18 20:31:18.074516] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.109 [2024-11-18 20:31:18.074523] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82d540) on tqpair=0x7d2650 00:30:06.109 [2024-11-18 20:31:18.074534] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.109 [2024-11-18 20:31:18.074544] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.109 [2024-11-18 20:31:18.074550] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.109 [2024-11-18 20:31:18.074557] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82d6c0) on tqpair=0x7d2650 00:30:06.109 [2024-11-18 20:31:18.074572] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.109 [2024-11-18 20:31:18.074581] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7d2650) 00:30:06.109 [2024-11-18 20:31:18.074592] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.109 [2024-11-18 20:31:18.074613] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82d6c0, cid 5, qid 0 00:30:06.109 [2024-11-18 20:31:18.074705] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.109 [2024-11-18 20:31:18.074719] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.109 [2024-11-18 20:31:18.074726] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.109 [2024-11-18 20:31:18.074733] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82d6c0) on tqpair=0x7d2650 00:30:06.109 [2024-11-18 20:31:18.074749] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.109 [2024-11-18 20:31:18.074758] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7d2650) 00:30:06.109 [2024-11-18 20:31:18.074768] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.109 [2024-11-18 20:31:18.074789] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x82d6c0, cid 5, qid 0 00:30:06.109 [2024-11-18 20:31:18.074886] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.109 [2024-11-18 20:31:18.074900] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.109 [2024-11-18 20:31:18.074908] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.109 [2024-11-18 20:31:18.074915] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82d6c0) on tqpair=0x7d2650 00:30:06.109 [2024-11-18 20:31:18.074931] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.109 [2024-11-18 20:31:18.074940] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7d2650) 00:30:06.109 [2024-11-18 20:31:18.074950] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.109 [2024-11-18 20:31:18.074971] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82d6c0, cid 5, qid 0 00:30:06.109 [2024-11-18 20:31:18.075047] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.109 [2024-11-18 20:31:18.075059] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.109 [2024-11-18 20:31:18.075066] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.109 [2024-11-18 20:31:18.075073] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82d6c0) on tqpair=0x7d2650 00:30:06.109 [2024-11-18 20:31:18.075097] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.109 [2024-11-18 20:31:18.075107] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7d2650) 00:30:06.109 [2024-11-18 20:31:18.075121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.109 [2024-11-18 
20:31:18.075134] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.109 [2024-11-18 20:31:18.075141] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7d2650) 00:30:06.109 [2024-11-18 20:31:18.075151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.109 [2024-11-18 20:31:18.075162] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.109 [2024-11-18 20:31:18.075169] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x7d2650) 00:30:06.109 [2024-11-18 20:31:18.075179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.110 [2024-11-18 20:31:18.075191] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.110 [2024-11-18 20:31:18.075199] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x7d2650) 00:30:06.110 [2024-11-18 20:31:18.075208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.110 [2024-11-18 20:31:18.075230] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82d6c0, cid 5, qid 0 00:30:06.110 [2024-11-18 20:31:18.075241] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82d540, cid 4, qid 0 00:30:06.110 [2024-11-18 20:31:18.075249] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82d840, cid 6, qid 0 00:30:06.110 [2024-11-18 20:31:18.075257] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82d9c0, cid 7, qid 0 00:30:06.110 [2024-11-18 20:31:18.075467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.110 
[2024-11-18 20:31:18.075482] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.110 [2024-11-18 20:31:18.075489] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.110 [2024-11-18 20:31:18.075496] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7d2650): datao=0, datal=8192, cccid=5 00:30:06.110 [2024-11-18 20:31:18.075504] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x82d6c0) on tqpair(0x7d2650): expected_datao=0, payload_size=8192 00:30:06.110 [2024-11-18 20:31:18.075511] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.110 [2024-11-18 20:31:18.075521] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.110 [2024-11-18 20:31:18.075530] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.110 [2024-11-18 20:31:18.075539] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.110 [2024-11-18 20:31:18.075548] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.110 [2024-11-18 20:31:18.075554] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.110 [2024-11-18 20:31:18.075561] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7d2650): datao=0, datal=512, cccid=4 00:30:06.110 [2024-11-18 20:31:18.075568] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x82d540) on tqpair(0x7d2650): expected_datao=0, payload_size=512 00:30:06.110 [2024-11-18 20:31:18.075575] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.110 [2024-11-18 20:31:18.075585] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.110 [2024-11-18 20:31:18.075592] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.110 [2024-11-18 20:31:18.075600] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.110 [2024-11-18 20:31:18.075609] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =7 00:30:06.110 [2024-11-18 20:31:18.075616] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.110 [2024-11-18 20:31:18.075622] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7d2650): datao=0, datal=512, cccid=6 00:30:06.110 [2024-11-18 20:31:18.075633] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x82d840) on tqpair(0x7d2650): expected_datao=0, payload_size=512 00:30:06.110 [2024-11-18 20:31:18.075650] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.110 [2024-11-18 20:31:18.075661] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.110 [2024-11-18 20:31:18.075668] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.110 [2024-11-18 20:31:18.075677] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.110 [2024-11-18 20:31:18.075686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.110 [2024-11-18 20:31:18.075692] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.110 [2024-11-18 20:31:18.075699] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7d2650): datao=0, datal=4096, cccid=7 00:30:06.110 [2024-11-18 20:31:18.075706] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x82d9c0) on tqpair(0x7d2650): expected_datao=0, payload_size=4096 00:30:06.110 [2024-11-18 20:31:18.075713] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.110 [2024-11-18 20:31:18.075734] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.110 [2024-11-18 20:31:18.075743] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.372 [2024-11-18 20:31:18.115762] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.372 [2024-11-18 20:31:18.115782] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.372 [2024-11-18 20:31:18.115791] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.372 [2024-11-18 20:31:18.115798] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82d6c0) on tqpair=0x7d2650 00:30:06.372 [2024-11-18 20:31:18.115823] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.372 [2024-11-18 20:31:18.115835] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.372 [2024-11-18 20:31:18.115842] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.372 [2024-11-18 20:31:18.115848] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82d540) on tqpair=0x7d2650 00:30:06.372 [2024-11-18 20:31:18.115864] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.372 [2024-11-18 20:31:18.115875] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.372 [2024-11-18 20:31:18.115881] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.372 [2024-11-18 20:31:18.115888] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82d840) on tqpair=0x7d2650 00:30:06.372 [2024-11-18 20:31:18.115898] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.372 [2024-11-18 20:31:18.115909] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.372 [2024-11-18 20:31:18.115916] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.372 [2024-11-18 20:31:18.115922] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82d9c0) on tqpair=0x7d2650 00:30:06.372 ===================================================== 00:30:06.372 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:06.372 ===================================================== 00:30:06.372 Controller Capabilities/Features 00:30:06.372 ================================ 00:30:06.372 Vendor ID: 8086 00:30:06.372 Subsystem Vendor ID: 8086 00:30:06.372 Serial Number: 
SPDK00000000000001 00:30:06.372 Model Number: SPDK bdev Controller 00:30:06.372 Firmware Version: 25.01 00:30:06.372 Recommended Arb Burst: 6 00:30:06.372 IEEE OUI Identifier: e4 d2 5c 00:30:06.372 Multi-path I/O 00:30:06.372 May have multiple subsystem ports: Yes 00:30:06.372 May have multiple controllers: Yes 00:30:06.372 Associated with SR-IOV VF: No 00:30:06.372 Max Data Transfer Size: 131072 00:30:06.372 Max Number of Namespaces: 32 00:30:06.372 Max Number of I/O Queues: 127 00:30:06.372 NVMe Specification Version (VS): 1.3 00:30:06.372 NVMe Specification Version (Identify): 1.3 00:30:06.372 Maximum Queue Entries: 128 00:30:06.372 Contiguous Queues Required: Yes 00:30:06.372 Arbitration Mechanisms Supported 00:30:06.372 Weighted Round Robin: Not Supported 00:30:06.372 Vendor Specific: Not Supported 00:30:06.372 Reset Timeout: 15000 ms 00:30:06.372 Doorbell Stride: 4 bytes 00:30:06.372 NVM Subsystem Reset: Not Supported 00:30:06.372 Command Sets Supported 00:30:06.372 NVM Command Set: Supported 00:30:06.372 Boot Partition: Not Supported 00:30:06.372 Memory Page Size Minimum: 4096 bytes 00:30:06.372 Memory Page Size Maximum: 4096 bytes 00:30:06.372 Persistent Memory Region: Not Supported 00:30:06.372 Optional Asynchronous Events Supported 00:30:06.372 Namespace Attribute Notices: Supported 00:30:06.372 Firmware Activation Notices: Not Supported 00:30:06.372 ANA Change Notices: Not Supported 00:30:06.372 PLE Aggregate Log Change Notices: Not Supported 00:30:06.372 LBA Status Info Alert Notices: Not Supported 00:30:06.372 EGE Aggregate Log Change Notices: Not Supported 00:30:06.372 Normal NVM Subsystem Shutdown event: Not Supported 00:30:06.372 Zone Descriptor Change Notices: Not Supported 00:30:06.372 Discovery Log Change Notices: Not Supported 00:30:06.372 Controller Attributes 00:30:06.372 128-bit Host Identifier: Supported 00:30:06.372 Non-Operational Permissive Mode: Not Supported 00:30:06.372 NVM Sets: Not Supported 00:30:06.372 Read Recovery Levels: Not 
Supported 00:30:06.372 Endurance Groups: Not Supported 00:30:06.372 Predictable Latency Mode: Not Supported 00:30:06.372 Traffic Based Keep ALive: Not Supported 00:30:06.372 Namespace Granularity: Not Supported 00:30:06.372 SQ Associations: Not Supported 00:30:06.372 UUID List: Not Supported 00:30:06.372 Multi-Domain Subsystem: Not Supported 00:30:06.372 Fixed Capacity Management: Not Supported 00:30:06.372 Variable Capacity Management: Not Supported 00:30:06.372 Delete Endurance Group: Not Supported 00:30:06.372 Delete NVM Set: Not Supported 00:30:06.372 Extended LBA Formats Supported: Not Supported 00:30:06.372 Flexible Data Placement Supported: Not Supported 00:30:06.372 00:30:06.372 Controller Memory Buffer Support 00:30:06.372 ================================ 00:30:06.372 Supported: No 00:30:06.372 00:30:06.372 Persistent Memory Region Support 00:30:06.372 ================================ 00:30:06.372 Supported: No 00:30:06.372 00:30:06.372 Admin Command Set Attributes 00:30:06.372 ============================ 00:30:06.372 Security Send/Receive: Not Supported 00:30:06.372 Format NVM: Not Supported 00:30:06.372 Firmware Activate/Download: Not Supported 00:30:06.372 Namespace Management: Not Supported 00:30:06.372 Device Self-Test: Not Supported 00:30:06.372 Directives: Not Supported 00:30:06.372 NVMe-MI: Not Supported 00:30:06.372 Virtualization Management: Not Supported 00:30:06.372 Doorbell Buffer Config: Not Supported 00:30:06.372 Get LBA Status Capability: Not Supported 00:30:06.372 Command & Feature Lockdown Capability: Not Supported 00:30:06.372 Abort Command Limit: 4 00:30:06.372 Async Event Request Limit: 4 00:30:06.372 Number of Firmware Slots: N/A 00:30:06.372 Firmware Slot 1 Read-Only: N/A 00:30:06.372 Firmware Activation Without Reset: N/A 00:30:06.372 Multiple Update Detection Support: N/A 00:30:06.372 Firmware Update Granularity: No Information Provided 00:30:06.372 Per-Namespace SMART Log: No 00:30:06.372 Asymmetric Namespace Access Log Page: Not 
Supported 00:30:06.372 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:30:06.372 Command Effects Log Page: Supported 00:30:06.372 Get Log Page Extended Data: Supported 00:30:06.372 Telemetry Log Pages: Not Supported 00:30:06.372 Persistent Event Log Pages: Not Supported 00:30:06.372 Supported Log Pages Log Page: May Support 00:30:06.372 Commands Supported & Effects Log Page: Not Supported 00:30:06.372 Feature Identifiers & Effects Log Page:May Support 00:30:06.372 NVMe-MI Commands & Effects Log Page: May Support 00:30:06.372 Data Area 4 for Telemetry Log: Not Supported 00:30:06.372 Error Log Page Entries Supported: 128 00:30:06.372 Keep Alive: Supported 00:30:06.372 Keep Alive Granularity: 10000 ms 00:30:06.372 00:30:06.372 NVM Command Set Attributes 00:30:06.372 ========================== 00:30:06.372 Submission Queue Entry Size 00:30:06.372 Max: 64 00:30:06.372 Min: 64 00:30:06.372 Completion Queue Entry Size 00:30:06.372 Max: 16 00:30:06.372 Min: 16 00:30:06.372 Number of Namespaces: 32 00:30:06.372 Compare Command: Supported 00:30:06.373 Write Uncorrectable Command: Not Supported 00:30:06.373 Dataset Management Command: Supported 00:30:06.373 Write Zeroes Command: Supported 00:30:06.373 Set Features Save Field: Not Supported 00:30:06.373 Reservations: Supported 00:30:06.373 Timestamp: Not Supported 00:30:06.373 Copy: Supported 00:30:06.373 Volatile Write Cache: Present 00:30:06.373 Atomic Write Unit (Normal): 1 00:30:06.373 Atomic Write Unit (PFail): 1 00:30:06.373 Atomic Compare & Write Unit: 1 00:30:06.373 Fused Compare & Write: Supported 00:30:06.373 Scatter-Gather List 00:30:06.373 SGL Command Set: Supported 00:30:06.373 SGL Keyed: Supported 00:30:06.373 SGL Bit Bucket Descriptor: Not Supported 00:30:06.373 SGL Metadata Pointer: Not Supported 00:30:06.373 Oversized SGL: Not Supported 00:30:06.373 SGL Metadata Address: Not Supported 00:30:06.373 SGL Offset: Supported 00:30:06.373 Transport SGL Data Block: Not Supported 00:30:06.373 Replay Protected Memory 
Block: Not Supported 00:30:06.373 00:30:06.373 Firmware Slot Information 00:30:06.373 ========================= 00:30:06.373 Active slot: 1 00:30:06.373 Slot 1 Firmware Revision: 25.01 00:30:06.373 00:30:06.373 00:30:06.373 Commands Supported and Effects 00:30:06.373 ============================== 00:30:06.373 Admin Commands 00:30:06.373 -------------- 00:30:06.373 Get Log Page (02h): Supported 00:30:06.373 Identify (06h): Supported 00:30:06.373 Abort (08h): Supported 00:30:06.373 Set Features (09h): Supported 00:30:06.373 Get Features (0Ah): Supported 00:30:06.373 Asynchronous Event Request (0Ch): Supported 00:30:06.373 Keep Alive (18h): Supported 00:30:06.373 I/O Commands 00:30:06.373 ------------ 00:30:06.373 Flush (00h): Supported LBA-Change 00:30:06.373 Write (01h): Supported LBA-Change 00:30:06.373 Read (02h): Supported 00:30:06.373 Compare (05h): Supported 00:30:06.373 Write Zeroes (08h): Supported LBA-Change 00:30:06.373 Dataset Management (09h): Supported LBA-Change 00:30:06.373 Copy (19h): Supported LBA-Change 00:30:06.373 00:30:06.373 Error Log 00:30:06.373 ========= 00:30:06.373 00:30:06.373 Arbitration 00:30:06.373 =========== 00:30:06.373 Arbitration Burst: 1 00:30:06.373 00:30:06.373 Power Management 00:30:06.373 ================ 00:30:06.373 Number of Power States: 1 00:30:06.373 Current Power State: Power State #0 00:30:06.373 Power State #0: 00:30:06.373 Max Power: 0.00 W 00:30:06.373 Non-Operational State: Operational 00:30:06.373 Entry Latency: Not Reported 00:30:06.373 Exit Latency: Not Reported 00:30:06.373 Relative Read Throughput: 0 00:30:06.373 Relative Read Latency: 0 00:30:06.373 Relative Write Throughput: 0 00:30:06.373 Relative Write Latency: 0 00:30:06.373 Idle Power: Not Reported 00:30:06.373 Active Power: Not Reported 00:30:06.373 Non-Operational Permissive Mode: Not Supported 00:30:06.373 00:30:06.373 Health Information 00:30:06.373 ================== 00:30:06.373 Critical Warnings: 00:30:06.373 Available Spare Space: OK 
00:30:06.373 Temperature: OK 00:30:06.373 Device Reliability: OK 00:30:06.373 Read Only: No 00:30:06.373 Volatile Memory Backup: OK 00:30:06.373 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:06.373 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:30:06.373 Available Spare: 0% 00:30:06.373 Available Spare Threshold: 0% 00:30:06.373 Life Percentage Used:[2024-11-18 20:31:18.116042] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.373 [2024-11-18 20:31:18.116055] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x7d2650) 00:30:06.373 [2024-11-18 20:31:18.116067] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.373 [2024-11-18 20:31:18.116091] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82d9c0, cid 7, qid 0 00:30:06.373 [2024-11-18 20:31:18.116204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.373 [2024-11-18 20:31:18.116217] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.373 [2024-11-18 20:31:18.116224] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.373 [2024-11-18 20:31:18.116231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82d9c0) on tqpair=0x7d2650 00:30:06.373 [2024-11-18 20:31:18.116278] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:30:06.373 [2024-11-18 20:31:18.116297] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82cf40) on tqpair=0x7d2650 00:30:06.373 [2024-11-18 20:31:18.116312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.373 [2024-11-18 20:31:18.116322] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82d0c0) on tqpair=0x7d2650 00:30:06.373 [2024-11-18 
20:31:18.116330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.373 [2024-11-18 20:31:18.116339] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82d240) on tqpair=0x7d2650 00:30:06.373 [2024-11-18 20:31:18.116346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.373 [2024-11-18 20:31:18.116355] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82d3c0) on tqpair=0x7d2650 00:30:06.373 [2024-11-18 20:31:18.116362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.373 [2024-11-18 20:31:18.116375] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.373 [2024-11-18 20:31:18.116383] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.373 [2024-11-18 20:31:18.116390] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d2650) 00:30:06.373 [2024-11-18 20:31:18.116400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.373 [2024-11-18 20:31:18.116423] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82d3c0, cid 3, qid 0 00:30:06.373 [2024-11-18 20:31:18.116526] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.373 [2024-11-18 20:31:18.116539] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.373 [2024-11-18 20:31:18.116546] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.373 [2024-11-18 20:31:18.116553] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82d3c0) on tqpair=0x7d2650 00:30:06.373 [2024-11-18 20:31:18.116564] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.373 [2024-11-18 
20:31:18.116572] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.373 [2024-11-18 20:31:18.116578] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d2650) 00:30:06.373 [2024-11-18 20:31:18.116589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.373 [2024-11-18 20:31:18.116614] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82d3c0, cid 3, qid 0 00:30:06.373 [2024-11-18 20:31:18.120650] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.373 [2024-11-18 20:31:18.120668] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.373 [2024-11-18 20:31:18.120676] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.373 [2024-11-18 20:31:18.120683] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82d3c0) on tqpair=0x7d2650 00:30:06.373 [2024-11-18 20:31:18.120691] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:30:06.373 [2024-11-18 20:31:18.120700] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:30:06.373 [2024-11-18 20:31:18.120718] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.373 [2024-11-18 20:31:18.120727] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.373 [2024-11-18 20:31:18.120734] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d2650) 00:30:06.373 [2024-11-18 20:31:18.120745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.373 [2024-11-18 20:31:18.120768] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82d3c0, cid 3, qid 0 00:30:06.373 [2024-11-18 20:31:18.120864] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.373 [2024-11-18 20:31:18.120878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.373 [2024-11-18 20:31:18.120885] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.373 [2024-11-18 20:31:18.120896] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82d3c0) on tqpair=0x7d2650 00:30:06.373 [2024-11-18 20:31:18.120910] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 0 milliseconds 00:30:06.373 0% 00:30:06.373 Data Units Read: 0 00:30:06.373 Data Units Written: 0 00:30:06.373 Host Read Commands: 0 00:30:06.373 Host Write Commands: 0 00:30:06.373 Controller Busy Time: 0 minutes 00:30:06.373 Power Cycles: 0 00:30:06.373 Power On Hours: 0 hours 00:30:06.373 Unsafe Shutdowns: 0 00:30:06.373 Unrecoverable Media Errors: 0 00:30:06.373 Lifetime Error Log Entries: 0 00:30:06.373 Warning Temperature Time: 0 minutes 00:30:06.373 Critical Temperature Time: 0 minutes 00:30:06.373 00:30:06.373 Number of Queues 00:30:06.373 ================ 00:30:06.373 Number of I/O Submission Queues: 127 00:30:06.373 Number of I/O Completion Queues: 127 00:30:06.373 00:30:06.373 Active Namespaces 00:30:06.373 ================= 00:30:06.373 Namespace ID:1 00:30:06.373 Error Recovery Timeout: Unlimited 00:30:06.373 Command Set Identifier: NVM (00h) 00:30:06.373 Deallocate: Supported 00:30:06.373 Deallocated/Unwritten Error: Not Supported 00:30:06.373 Deallocated Read Value: Unknown 00:30:06.373 Deallocate in Write Zeroes: Not Supported 00:30:06.373 Deallocated Guard Field: 0xFFFF 00:30:06.373 Flush: Supported 00:30:06.374 Reservation: Supported 00:30:06.374 Namespace Sharing Capabilities: Multiple Controllers 00:30:06.374 Size (in LBAs): 131072 (0GiB) 00:30:06.374 Capacity (in LBAs): 131072 (0GiB) 00:30:06.374 Utilization (in LBAs): 131072 (0GiB) 00:30:06.374 NGUID: ABCDEF0123456789ABCDEF0123456789 
00:30:06.374 EUI64: ABCDEF0123456789 00:30:06.374 UUID: 007169e8-a270-48ef-b9a6-e7a6da0098a3 00:30:06.374 Thin Provisioning: Not Supported 00:30:06.374 Per-NS Atomic Units: Yes 00:30:06.374 Atomic Boundary Size (Normal): 0 00:30:06.374 Atomic Boundary Size (PFail): 0 00:30:06.374 Atomic Boundary Offset: 0 00:30:06.374 Maximum Single Source Range Length: 65535 00:30:06.374 Maximum Copy Length: 65535 00:30:06.374 Maximum Source Range Count: 1 00:30:06.374 NGUID/EUI64 Never Reused: No 00:30:06.374 Namespace Write Protected: No 00:30:06.374 Number of LBA Formats: 1 00:30:06.374 Current LBA Format: LBA Format #00 00:30:06.374 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:06.374 00:30:06.374 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:06.374 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:06.374 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.374 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.374 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.374 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:06.374 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:30:06.374 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:06.374 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:30:06.374 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:06.374 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:30:06.374 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:06.374 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe 
-v -r nvme-tcp 00:30:06.374 rmmod nvme_tcp 00:30:06.374 rmmod nvme_fabrics 00:30:06.374 rmmod nvme_keyring 00:30:06.374 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:06.374 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:30:06.374 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:30:06.374 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 340363 ']' 00:30:06.374 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 340363 00:30:06.374 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 340363 ']' 00:30:06.374 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 340363 00:30:06.374 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:30:06.374 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:06.374 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 340363 00:30:06.374 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:06.374 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:06.374 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 340363' 00:30:06.374 killing process with pid 340363 00:30:06.374 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 340363 00:30:06.374 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 340363 00:30:06.633 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:06.633 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:06.633 
20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:06.633 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:30:06.633 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:30:06.633 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:06.633 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:30:06.633 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:06.633 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:06.633 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.633 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:06.633 20:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.536 20:31:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:08.536 00:30:08.536 real 0m5.629s 00:30:08.536 user 0m5.042s 00:30:08.536 sys 0m1.948s 00:30:08.536 20:31:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:08.536 20:31:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:08.536 ************************************ 00:30:08.536 END TEST nvmf_identify 00:30:08.536 ************************************ 00:30:08.795 20:31:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:08.795 20:31:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:08.795 20:31:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:08.795 20:31:20 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.795 ************************************ 00:30:08.795 START TEST nvmf_perf 00:30:08.795 ************************************ 00:30:08.795 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:08.795 * Looking for test storage... 00:30:08.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:08.795 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:08.795 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:30:08.795 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:08.795 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:08.795 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:08.795 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:08.795 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:08.795 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:30:08.795 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:30:08.795 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:08.796 
20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:08.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.796 --rc genhtml_branch_coverage=1 
00:30:08.796 --rc genhtml_function_coverage=1 00:30:08.796 --rc genhtml_legend=1 00:30:08.796 --rc geninfo_all_blocks=1 00:30:08.796 --rc geninfo_unexecuted_blocks=1 00:30:08.796 00:30:08.796 ' 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:08.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.796 --rc genhtml_branch_coverage=1 00:30:08.796 --rc genhtml_function_coverage=1 00:30:08.796 --rc genhtml_legend=1 00:30:08.796 --rc geninfo_all_blocks=1 00:30:08.796 --rc geninfo_unexecuted_blocks=1 00:30:08.796 00:30:08.796 ' 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:08.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.796 --rc genhtml_branch_coverage=1 00:30:08.796 --rc genhtml_function_coverage=1 00:30:08.796 --rc genhtml_legend=1 00:30:08.796 --rc geninfo_all_blocks=1 00:30:08.796 --rc geninfo_unexecuted_blocks=1 00:30:08.796 00:30:08.796 ' 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:08.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.796 --rc genhtml_branch_coverage=1 00:30:08.796 --rc genhtml_function_coverage=1 00:30:08.796 --rc genhtml_legend=1 00:30:08.796 --rc geninfo_all_blocks=1 00:30:08.796 --rc geninfo_unexecuted_blocks=1 00:30:08.796 00:30:08.796 ' 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:08.796 20:31:20 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:08.796 20:31:20 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:08.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:30:08.796 20:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:11.346 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:11.346 
20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:11.346 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up 
]] 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:11.346 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:11.346 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:11.346 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:11.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:11.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:30:11.347 00:30:11.347 --- 10.0.0.2 ping statistics --- 00:30:11.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.347 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:11.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:11.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:30:11.347 00:30:11.347 --- 10.0.0.1 ping statistics --- 00:30:11.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.347 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:11.347 20:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:11.347 20:31:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:11.347 20:31:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:11.347 20:31:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:11.347 20:31:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:11.347 20:31:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=342454 00:30:11.347 20:31:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:11.347 20:31:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 342454 00:30:11.347 
20:31:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 342454 ']' 00:30:11.347 20:31:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.347 20:31:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:11.347 20:31:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:11.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:11.347 20:31:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:11.347 20:31:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:11.347 [2024-11-18 20:31:23.064143] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:30:11.347 [2024-11-18 20:31:23.064231] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:11.347 [2024-11-18 20:31:23.135881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:11.347 [2024-11-18 20:31:23.181695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:11.347 [2024-11-18 20:31:23.181762] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:11.347 [2024-11-18 20:31:23.181785] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:11.347 [2024-11-18 20:31:23.181797] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:11.347 [2024-11-18 20:31:23.181807] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:11.347 [2024-11-18 20:31:23.183300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:11.347 [2024-11-18 20:31:23.183357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:11.347 [2024-11-18 20:31:23.183423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:11.347 [2024-11-18 20:31:23.183425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:11.347 20:31:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:11.347 20:31:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:30:11.347 20:31:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:11.347 20:31:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:11.347 20:31:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:11.347 20:31:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:11.347 20:31:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:11.347 20:31:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:14.874 20:31:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:14.874 20:31:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:14.874 20:31:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:30:14.874 20:31:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:15.133 20:31:27 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:30:15.133 20:31:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:30:15.133 20:31:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:15.133 20:31:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:15.133 20:31:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:15.391 [2024-11-18 20:31:27.294598] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:15.391 20:31:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:15.649 20:31:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:15.649 20:31:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:15.908 20:31:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:15.908 20:31:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:16.167 20:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:16.425 [2024-11-18 20:31:28.382573] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:16.425 20:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:30:16.683 20:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:30:16.683 20:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:16.683 20:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:16.683 20:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:18.060 Initializing NVMe Controllers 00:30:18.060 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:30:18.060 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:30:18.060 Initialization complete. Launching workers. 00:30:18.060 ======================================================== 00:30:18.060 Latency(us) 00:30:18.060 Device Information : IOPS MiB/s Average min max 00:30:18.060 PCIE (0000:88:00.0) NSID 1 from core 0: 85142.55 332.59 375.25 42.86 8242.47 00:30:18.060 ======================================================== 00:30:18.060 Total : 85142.55 332.59 375.25 42.86 8242.47 00:30:18.060 00:30:18.061 20:31:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:19.435 Initializing NVMe Controllers 00:30:19.435 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:19.435 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:19.435 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:19.435 Initialization complete. Launching workers. 
00:30:19.435 ======================================================== 00:30:19.435 Latency(us) 00:30:19.435 Device Information : IOPS MiB/s Average min max 00:30:19.435 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 171.56 0.67 5920.99 135.67 45764.63 00:30:19.435 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.85 0.22 17727.61 6989.69 47922.08 00:30:19.435 ======================================================== 00:30:19.435 Total : 228.42 0.89 8859.75 135.67 47922.08 00:30:19.435 00:30:19.435 20:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:20.817 Initializing NVMe Controllers 00:30:20.817 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:20.817 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:20.817 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:20.817 Initialization complete. Launching workers. 
00:30:20.817 ======================================================== 00:30:20.817 Latency(us) 00:30:20.817 Device Information : IOPS MiB/s Average min max 00:30:20.817 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8469.00 33.08 3793.90 666.27 7560.49 00:30:20.817 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3909.00 15.27 8222.84 5842.01 16089.07 00:30:20.817 ======================================================== 00:30:20.817 Total : 12378.00 48.35 5192.57 666.27 16089.07 00:30:20.817 00:30:20.817 20:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:20.817 20:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:20.817 20:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:23.351 Initializing NVMe Controllers 00:30:23.351 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:23.351 Controller IO queue size 128, less than required. 00:30:23.351 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:23.351 Controller IO queue size 128, less than required. 00:30:23.351 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:23.351 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:23.351 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:23.351 Initialization complete. Launching workers. 
00:30:23.351 ======================================================== 00:30:23.351 Latency(us) 00:30:23.351 Device Information : IOPS MiB/s Average min max 00:30:23.351 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1743.91 435.98 74665.89 51660.14 110137.66 00:30:23.351 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 585.13 146.28 236084.22 112972.34 380974.57 00:30:23.351 ======================================================== 00:30:23.351 Total : 2329.03 582.26 115219.27 51660.14 380974.57 00:30:23.351 00:30:23.351 20:31:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:23.351 No valid NVMe controllers or AIO or URING devices found 00:30:23.351 Initializing NVMe Controllers 00:30:23.351 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:23.351 Controller IO queue size 128, less than required. 00:30:23.351 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:23.351 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:23.351 Controller IO queue size 128, less than required. 00:30:23.351 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:23.351 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:30:23.351 WARNING: Some requested NVMe devices were skipped 00:30:23.351 20:31:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:25.886 Initializing NVMe Controllers 00:30:25.886 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:25.886 Controller IO queue size 128, less than required. 00:30:25.886 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:25.886 Controller IO queue size 128, less than required. 00:30:25.886 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:25.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:25.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:25.886 Initialization complete. Launching workers. 
00:30:25.886 00:30:25.886 ==================== 00:30:25.886 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:25.886 TCP transport: 00:30:25.886 polls: 9611 00:30:25.886 idle_polls: 6413 00:30:25.886 sock_completions: 3198 00:30:25.886 nvme_completions: 5765 00:30:25.886 submitted_requests: 8558 00:30:25.886 queued_requests: 1 00:30:25.886 00:30:25.886 ==================== 00:30:25.886 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:25.886 TCP transport: 00:30:25.886 polls: 12557 00:30:25.886 idle_polls: 8521 00:30:25.886 sock_completions: 4036 00:30:25.886 nvme_completions: 6585 00:30:25.886 submitted_requests: 9734 00:30:25.886 queued_requests: 1 00:30:25.886 ======================================================== 00:30:25.886 Latency(us) 00:30:25.886 Device Information : IOPS MiB/s Average min max 00:30:25.886 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1438.50 359.63 91776.91 48453.43 156330.64 00:30:25.886 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1643.15 410.79 78220.73 41960.53 109059.49 00:30:25.886 ======================================================== 00:30:25.886 Total : 3081.65 770.41 84548.70 41960.53 156330.64 00:30:25.886 00:30:25.886 20:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:25.886 20:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:26.144 20:31:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:26.144 20:31:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:30:26.144 20:31:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:29.431 20:31:41 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@72 -- # ls_guid=7a4644b5-5bdb-4e51-98dd-5b9d7ad4ceb9 00:30:29.431 20:31:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 7a4644b5-5bdb-4e51-98dd-5b9d7ad4ceb9 00:30:29.431 20:31:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=7a4644b5-5bdb-4e51-98dd-5b9d7ad4ceb9 00:30:29.431 20:31:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:29.431 20:31:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:29.431 20:31:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:29.431 20:31:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:29.690 20:31:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:29.690 { 00:30:29.690 "uuid": "7a4644b5-5bdb-4e51-98dd-5b9d7ad4ceb9", 00:30:29.690 "name": "lvs_0", 00:30:29.690 "base_bdev": "Nvme0n1", 00:30:29.690 "total_data_clusters": 238234, 00:30:29.690 "free_clusters": 238234, 00:30:29.690 "block_size": 512, 00:30:29.690 "cluster_size": 4194304 00:30:29.690 } 00:30:29.690 ]' 00:30:29.690 20:31:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="7a4644b5-5bdb-4e51-98dd-5b9d7ad4ceb9") .free_clusters' 00:30:29.690 20:31:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:30:29.690 20:31:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="7a4644b5-5bdb-4e51-98dd-5b9d7ad4ceb9") .cluster_size' 00:30:29.948 20:31:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:29.948 20:31:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:30:29.948 20:31:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 
00:30:29.948 952936 00:30:29.948 20:31:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:29.948 20:31:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:29.948 20:31:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7a4644b5-5bdb-4e51-98dd-5b9d7ad4ceb9 lbd_0 20480 00:30:30.515 20:31:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=aec4ab21-259d-4bbb-83fc-0a28a9df970d 00:30:30.515 20:31:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore aec4ab21-259d-4bbb-83fc-0a28a9df970d lvs_n_0 00:30:31.450 20:31:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=b1b79e16-980a-4a59-909b-fb5cc82c9cb2 00:30:31.450 20:31:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb b1b79e16-980a-4a59-909b-fb5cc82c9cb2 00:30:31.450 20:31:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=b1b79e16-980a-4a59-909b-fb5cc82c9cb2 00:30:31.450 20:31:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:31.450 20:31:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:31.450 20:31:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:31.450 20:31:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:31.450 20:31:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:31.450 { 00:30:31.450 "uuid": "7a4644b5-5bdb-4e51-98dd-5b9d7ad4ceb9", 00:30:31.450 "name": "lvs_0", 00:30:31.451 "base_bdev": "Nvme0n1", 00:30:31.451 "total_data_clusters": 238234, 00:30:31.451 "free_clusters": 233114, 00:30:31.451 "block_size": 512, 00:30:31.451 
"cluster_size": 4194304 00:30:31.451 }, 00:30:31.451 { 00:30:31.451 "uuid": "b1b79e16-980a-4a59-909b-fb5cc82c9cb2", 00:30:31.451 "name": "lvs_n_0", 00:30:31.451 "base_bdev": "aec4ab21-259d-4bbb-83fc-0a28a9df970d", 00:30:31.451 "total_data_clusters": 5114, 00:30:31.451 "free_clusters": 5114, 00:30:31.451 "block_size": 512, 00:30:31.451 "cluster_size": 4194304 00:30:31.451 } 00:30:31.451 ]' 00:30:31.451 20:31:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="b1b79e16-980a-4a59-909b-fb5cc82c9cb2") .free_clusters' 00:30:31.451 20:31:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:30:31.451 20:31:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="b1b79e16-980a-4a59-909b-fb5cc82c9cb2") .cluster_size' 00:30:31.451 20:31:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:31.451 20:31:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:30:31.451 20:31:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456 00:30:31.451 20456 00:30:31.451 20:31:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:31.451 20:31:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b1b79e16-980a-4a59-909b-fb5cc82c9cb2 lbd_nest_0 20456 00:30:32.019 20:31:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=f72feb6b-e6b0-4bad-bea1-da81ffeb15a8 00:30:32.019 20:31:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:32.019 20:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:32.019 20:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 f72feb6b-e6b0-4bad-bea1-da81ffeb15a8 00:30:32.587 20:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:32.587 20:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:32.587 20:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:32.587 20:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:32.587 20:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:32.587 20:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:44.805 Initializing NVMe Controllers 00:30:44.805 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:44.805 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:44.805 Initialization complete. Launching workers. 
00:30:44.805 ======================================================== 00:30:44.805 Latency(us) 00:30:44.805 Device Information : IOPS MiB/s Average min max 00:30:44.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.50 0.02 21101.68 178.19 46057.71 00:30:44.805 ======================================================== 00:30:44.805 Total : 47.50 0.02 21101.68 178.19 46057.71 00:30:44.805 00:30:44.805 20:31:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:44.805 20:31:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:54.786 Initializing NVMe Controllers 00:30:54.786 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:54.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:54.786 Initialization complete. Launching workers. 
00:30:54.786 ======================================================== 00:30:54.786 Latency(us) 00:30:54.786 Device Information : IOPS MiB/s Average min max 00:30:54.786 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.70 9.96 12555.89 4970.68 47890.77 00:30:54.786 ======================================================== 00:30:54.786 Total : 79.70 9.96 12555.89 4970.68 47890.77 00:30:54.786 00:30:54.786 20:32:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:54.786 20:32:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:54.786 20:32:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:04.773 Initializing NVMe Controllers 00:31:04.773 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:04.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:04.773 Initialization complete. Launching workers. 
00:31:04.773 ======================================================== 00:31:04.773 Latency(us) 00:31:04.773 Device Information : IOPS MiB/s Average min max 00:31:04.773 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7550.74 3.69 4238.04 301.43 10385.13 00:31:04.773 ======================================================== 00:31:04.773 Total : 7550.74 3.69 4238.04 301.43 10385.13 00:31:04.773 00:31:04.773 20:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:04.773 20:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:14.755 Initializing NVMe Controllers 00:31:14.755 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:14.755 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:14.755 Initialization complete. Launching workers. 
00:31:14.755 ======================================================== 00:31:14.755 Latency(us) 00:31:14.756 Device Information : IOPS MiB/s Average min max 00:31:14.756 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3936.98 492.12 8127.54 676.03 16140.60 00:31:14.756 ======================================================== 00:31:14.756 Total : 3936.98 492.12 8127.54 676.03 16140.60 00:31:14.756 00:31:14.756 20:32:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:14.756 20:32:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:14.756 20:32:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:24.738 Initializing NVMe Controllers 00:31:24.738 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:24.738 Controller IO queue size 128, less than required. 00:31:24.738 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:24.738 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:24.738 Initialization complete. Launching workers. 
00:31:24.738 ======================================================== 00:31:24.738 Latency(us) 00:31:24.738 Device Information : IOPS MiB/s Average min max 00:31:24.738 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11655.50 5.69 10988.41 1765.81 49760.85 00:31:24.738 ======================================================== 00:31:24.738 Total : 11655.50 5.69 10988.41 1765.81 49760.85 00:31:24.738 00:31:24.738 20:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:24.738 20:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:36.938 Initializing NVMe Controllers 00:31:36.938 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:36.938 Controller IO queue size 128, less than required. 00:31:36.938 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:36.938 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:36.938 Initialization complete. Launching workers. 
00:31:36.938 ======================================================== 00:31:36.938 Latency(us) 00:31:36.938 Device Information : IOPS MiB/s Average min max 00:31:36.938 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1176.90 147.11 108880.39 9686.23 223654.37 00:31:36.938 ======================================================== 00:31:36.938 Total : 1176.90 147.11 108880.39 9686.23 223654.37 00:31:36.938 00:31:36.938 20:32:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:36.938 20:32:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f72feb6b-e6b0-4bad-bea1-da81ffeb15a8 00:31:36.938 20:32:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:36.938 20:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete aec4ab21-259d-4bbb-83fc-0a28a9df970d 00:31:36.939 20:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:36.939 20:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:36.939 20:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:36.939 20:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:36.939 20:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:31:36.939 20:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:36.939 20:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:31:36.939 20:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in 
{1..20} 00:31:36.939 20:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:36.939 rmmod nvme_tcp 00:31:36.939 rmmod nvme_fabrics 00:31:36.939 rmmod nvme_keyring 00:31:36.939 20:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:36.939 20:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:31:36.939 20:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:31:36.939 20:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 342454 ']' 00:31:36.939 20:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 342454 00:31:36.939 20:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 342454 ']' 00:31:36.939 20:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 342454 00:31:36.939 20:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:31:36.939 20:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:36.939 20:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 342454 00:31:36.939 20:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:36.939 20:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:36.939 20:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 342454' 00:31:36.939 killing process with pid 342454 00:31:36.939 20:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 342454 00:31:36.939 20:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 342454 00:31:38.837 20:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:38.837 20:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:31:38.837 20:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:38.837 20:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:31:38.837 20:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:31:38.837 20:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:31:38.837 20:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:38.837 20:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:38.837 20:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:38.837 20:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:38.837 20:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:38.837 20:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:40.740 00:31:40.740 real 1m31.930s 00:31:40.740 user 5m37.251s 00:31:40.740 sys 0m16.422s 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:40.740 ************************************ 00:31:40.740 END TEST nvmf_perf 00:31:40.740 ************************************ 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:40.740 ************************************ 00:31:40.740 START TEST nvmf_fio_host 00:31:40.740 ************************************ 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:40.740 * Looking for test storage... 00:31:40.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:40.740 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- 
# export 'LCOV_OPTS= 00:31:40.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.740 --rc genhtml_branch_coverage=1 00:31:40.740 --rc genhtml_function_coverage=1 00:31:40.740 --rc genhtml_legend=1 00:31:40.740 --rc geninfo_all_blocks=1 00:31:40.740 --rc geninfo_unexecuted_blocks=1 00:31:40.740 00:31:40.740 ' 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:40.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.741 --rc genhtml_branch_coverage=1 00:31:40.741 --rc genhtml_function_coverage=1 00:31:40.741 --rc genhtml_legend=1 00:31:40.741 --rc geninfo_all_blocks=1 00:31:40.741 --rc geninfo_unexecuted_blocks=1 00:31:40.741 00:31:40.741 ' 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:40.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.741 --rc genhtml_branch_coverage=1 00:31:40.741 --rc genhtml_function_coverage=1 00:31:40.741 --rc genhtml_legend=1 00:31:40.741 --rc geninfo_all_blocks=1 00:31:40.741 --rc geninfo_unexecuted_blocks=1 00:31:40.741 00:31:40.741 ' 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:40.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.741 --rc genhtml_branch_coverage=1 00:31:40.741 --rc genhtml_function_coverage=1 00:31:40.741 --rc genhtml_legend=1 00:31:40.741 --rc geninfo_all_blocks=1 00:31:40.741 --rc geninfo_unexecuted_blocks=1 00:31:40.741 00:31:40.741 ' 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:40.741 20:32:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:40.741 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:40.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:40.742 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:40.742 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:40.742 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:40.742 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:40.742 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:40.742 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:40.742 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:40.742 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:40.742 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:40.742 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:40.742 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.742 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:40.742 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.742 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:40.742 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:40.742 20:32:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:40.742 20:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:31:43.276 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:43.276 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:43.276 20:32:54 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:43.276 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:43.276 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:43.276 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:43.277 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:43.277 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:43.277 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:43.277 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:43.277 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:43.277 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:43.277 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:43.277 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:43.277 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:43.277 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:43.277 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:43.277 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:43.277 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:43.277 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:43.277 20:32:54 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:43.277 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:43.277 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:43.277 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:43.277 20:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:43.277 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:43.277 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:43.277 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:43.277 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:43.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:43.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:31:43.277 00:31:43.277 --- 10.0.0.2 ping statistics --- 00:31:43.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.277 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:31:43.277 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:43.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:43.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:31:43.277 00:31:43.277 --- 10.0.0.1 ping statistics --- 00:31:43.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.277 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:31:43.277 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:43.277 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:31:43.277 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:43.277 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:43.277 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:43.277 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:43.277 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:43.277 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:43.277 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:43.277 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:43.277 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:43.277 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:43.277 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.277 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=354558 00:31:43.277 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:43.277 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:43.277 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 354558 00:31:43.277 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 354558 ']' 00:31:43.277 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:43.277 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:43.277 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:43.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:43.277 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:43.277 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.277 [2024-11-18 20:32:55.112282] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:31:43.277 [2024-11-18 20:32:55.112375] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:43.277 [2024-11-18 20:32:55.185452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:43.277 [2024-11-18 20:32:55.228823] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:43.277 [2024-11-18 20:32:55.228878] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:43.277 [2024-11-18 20:32:55.228907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:43.277 [2024-11-18 20:32:55.228919] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:43.277 [2024-11-18 20:32:55.228928] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:43.277 [2024-11-18 20:32:55.230549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:43.277 [2024-11-18 20:32:55.230670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:43.277 [2024-11-18 20:32:55.230751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:43.277 [2024-11-18 20:32:55.230747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:43.535 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:43.535 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:31:43.535 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:43.793 [2024-11-18 20:32:55.602511] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:43.793 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:43.793 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:43.793 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.793 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:44.052 Malloc1 00:31:44.052 20:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:44.310 20:32:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:44.568 20:32:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:44.827 [2024-11-18 20:32:56.752584] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:44.827 20:32:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:45.085 20:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:45.085 20:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:45.085 20:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:45.085 20:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:45.085 20:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:45.085 20:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:45.085 20:32:57 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:45.085 20:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:45.085 20:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:45.085 20:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:45.085 20:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:45.085 20:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:45.085 20:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:45.085 20:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:45.085 20:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:45.085 20:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:45.085 20:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:45.085 20:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:45.085 20:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:45.085 20:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:45.085 20:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:45.085 20:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:45.085 20:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:45.343 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:45.343 fio-3.35 00:31:45.343 Starting 1 thread 00:31:47.880 00:31:47.880 test: (groupid=0, jobs=1): err= 0: pid=354921: Mon Nov 18 20:32:59 2024 00:31:47.880 read: IOPS=8801, BW=34.4MiB/s (36.0MB/s)(69.0MiB/2006msec) 00:31:47.880 slat (nsec): min=1979, max=227473, avg=2585.50, stdev=2504.14 00:31:47.880 clat (usec): min=2880, max=13647, avg=7920.97, stdev=680.48 00:31:47.880 lat (usec): min=2918, max=13650, avg=7923.55, stdev=680.34 00:31:47.880 clat percentiles (usec): 00:31:47.880 | 1.00th=[ 6390], 5.00th=[ 6849], 10.00th=[ 7111], 20.00th=[ 7373], 00:31:47.880 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8094], 00:31:47.880 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8979], 00:31:47.880 | 99.00th=[ 9372], 99.50th=[ 9503], 99.90th=[11863], 99.95th=[13173], 00:31:47.880 | 99.99th=[13566] 00:31:47.880 bw ( KiB/s): min=34328, max=35896, per=99.91%, avg=35174.00, stdev=644.20, samples=4 00:31:47.880 iops : min= 8582, max= 8974, avg=8793.50, stdev=161.05, samples=4 00:31:47.880 write: IOPS=8812, BW=34.4MiB/s (36.1MB/s)(69.1MiB/2006msec); 0 zone resets 00:31:47.880 slat (usec): min=2, max=220, avg= 2.75, stdev= 2.04 00:31:47.880 clat (usec): min=2098, max=12828, avg=6553.31, stdev=549.65 00:31:47.880 lat (usec): min=2111, max=12830, avg=6556.06, stdev=549.61 00:31:47.880 clat percentiles (usec): 00:31:47.880 | 1.00th=[ 5276], 5.00th=[ 5735], 10.00th=[ 5932], 20.00th=[ 6128], 00:31:47.880 | 30.00th=[ 6325], 40.00th=[ 6456], 50.00th=[ 6521], 60.00th=[ 6652], 00:31:47.880 | 
70.00th=[ 6849], 80.00th=[ 6980], 90.00th=[ 7177], 95.00th=[ 7373], 00:31:47.880 | 99.00th=[ 7767], 99.50th=[ 8029], 99.90th=[11076], 99.95th=[11600], 00:31:47.880 | 99.99th=[11863] 00:31:47.880 bw ( KiB/s): min=35072, max=35392, per=99.97%, avg=35236.00, stdev=132.18, samples=4 00:31:47.880 iops : min= 8768, max= 8848, avg=8809.00, stdev=33.05, samples=4 00:31:47.880 lat (msec) : 4=0.11%, 10=99.71%, 20=0.19% 00:31:47.880 cpu : usr=63.69%, sys=34.66%, ctx=51, majf=0, minf=41 00:31:47.880 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:47.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.880 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:47.880 issued rwts: total=17655,17677,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.880 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:47.880 00:31:47.880 Run status group 0 (all jobs): 00:31:47.880 READ: bw=34.4MiB/s (36.0MB/s), 34.4MiB/s-34.4MiB/s (36.0MB/s-36.0MB/s), io=69.0MiB (72.3MB), run=2006-2006msec 00:31:47.880 WRITE: bw=34.4MiB/s (36.1MB/s), 34.4MiB/s-34.4MiB/s (36.1MB/s-36.1MB/s), io=69.1MiB (72.4MB), run=2006-2006msec 00:31:47.880 20:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:47.880 20:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:47.880 20:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:47.880 20:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:31:47.880 20:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:47.880 20:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:47.880 20:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:47.880 20:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:47.880 20:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:47.880 20:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:47.881 20:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:47.881 20:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:47.881 20:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:47.881 20:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:47.881 20:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:47.881 20:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:47.881 20:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:47.881 20:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:47.881 20:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:47.881 20:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:47.881 
20:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:47.881 20:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:47.881 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:47.881 fio-3.35 00:31:47.881 Starting 1 thread 00:31:50.410 00:31:50.410 test: (groupid=0, jobs=1): err= 0: pid=355304: Mon Nov 18 20:33:02 2024 00:31:50.410 read: IOPS=8355, BW=131MiB/s (137MB/s)(262MiB/2010msec) 00:31:50.410 slat (nsec): min=2821, max=96813, avg=3674.60, stdev=1823.60 00:31:50.410 clat (usec): min=2100, max=17350, avg=8884.56, stdev=2243.28 00:31:50.410 lat (usec): min=2104, max=17354, avg=8888.24, stdev=2243.32 00:31:50.410 clat percentiles (usec): 00:31:50.410 | 1.00th=[ 4752], 5.00th=[ 5604], 10.00th=[ 6128], 20.00th=[ 6980], 00:31:50.410 | 30.00th=[ 7504], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9241], 00:31:50.410 | 70.00th=[ 9896], 80.00th=[10552], 90.00th=[11731], 95.00th=[13173], 00:31:50.410 | 99.00th=[15008], 99.50th=[15533], 99.90th=[16057], 99.95th=[16188], 00:31:50.410 | 99.99th=[16909] 00:31:50.410 bw ( KiB/s): min=59232, max=74496, per=50.84%, avg=67960.00, stdev=7497.04, samples=4 00:31:50.410 iops : min= 3702, max= 4656, avg=4247.50, stdev=468.56, samples=4 00:31:50.410 write: IOPS=4819, BW=75.3MiB/s (79.0MB/s)(138MiB/1835msec); 0 zone resets 00:31:50.410 slat (usec): min=30, max=142, avg=34.15, stdev= 5.46 00:31:50.410 clat (usec): min=5023, max=19325, avg=11551.78, stdev=2075.39 00:31:50.410 lat (usec): min=5055, max=19356, avg=11585.94, stdev=2075.26 00:31:50.410 clat percentiles (usec): 00:31:50.410 | 1.00th=[ 7701], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9765], 
00:31:50.410 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11338], 60.00th=[11863], 00:31:50.410 | 70.00th=[12387], 80.00th=[13304], 90.00th=[14484], 95.00th=[15401], 00:31:50.410 | 99.00th=[16909], 99.50th=[17695], 99.90th=[18744], 99.95th=[19006], 00:31:50.410 | 99.99th=[19268] 00:31:50.410 bw ( KiB/s): min=63008, max=77664, per=91.13%, avg=70272.00, stdev=7181.75, samples=4 00:31:50.410 iops : min= 3938, max= 4854, avg=4392.00, stdev=448.86, samples=4 00:31:50.410 lat (msec) : 4=0.22%, 10=55.74%, 20=44.04% 00:31:50.410 cpu : usr=76.90%, sys=21.80%, ctx=33, majf=0, minf=61 00:31:50.410 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:31:50.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:50.410 issued rwts: total=16794,8844,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.410 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:50.410 00:31:50.410 Run status group 0 (all jobs): 00:31:50.410 READ: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=262MiB (275MB), run=2010-2010msec 00:31:50.410 WRITE: bw=75.3MiB/s (79.0MB/s), 75.3MiB/s-75.3MiB/s (79.0MB/s-79.0MB/s), io=138MiB (145MB), run=1835-1835msec 00:31:50.410 20:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:50.668 20:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:50.668 20:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:50.668 20:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:50.668 20:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:50.668 20:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:31:50.668 20:33:02 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:50.668 20:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:50.668 20:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:50.668 20:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:50.668 20:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:31:50.668 20:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:31:53.955 Nvme0n1 00:31:53.955 20:33:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:57.242 20:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=886c79d9-1754-493f-a70f-25b18748c4c9 00:31:57.242 20:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 886c79d9-1754-493f-a70f-25b18748c4c9 00:31:57.242 20:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=886c79d9-1754-493f-a70f-25b18748c4c9 00:31:57.242 20:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:57.242 20:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:57.242 20:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:57.242 20:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 
00:31:57.242 20:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:57.242 { 00:31:57.242 "uuid": "886c79d9-1754-493f-a70f-25b18748c4c9", 00:31:57.242 "name": "lvs_0", 00:31:57.242 "base_bdev": "Nvme0n1", 00:31:57.242 "total_data_clusters": 930, 00:31:57.242 "free_clusters": 930, 00:31:57.242 "block_size": 512, 00:31:57.242 "cluster_size": 1073741824 00:31:57.242 } 00:31:57.242 ]' 00:31:57.242 20:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="886c79d9-1754-493f-a70f-25b18748c4c9") .free_clusters' 00:31:57.242 20:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:31:57.242 20:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="886c79d9-1754-493f-a70f-25b18748c4c9") .cluster_size' 00:31:57.242 20:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:31:57.242 20:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:31:57.242 20:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:31:57.242 952320 00:31:57.242 20:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:57.500 6039d277-943c-42ad-8c32-833fe36e21d0 00:31:57.500 20:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:57.758 20:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:58.016 20:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:58.274 20:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:58.274 20:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:58.274 20:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:58.274 20:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:58.274 20:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:58.274 20:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:58.274 20:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:58.274 20:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:58.274 20:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:58.274 20:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:58.274 20:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:58.274 20:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print 
$3}' 00:31:58.274 20:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:58.274 20:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:58.274 20:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:58.274 20:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:58.274 20:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:58.274 20:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:58.274 20:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:58.274 20:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:58.274 20:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:58.274 20:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:58.532 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:58.532 fio-3.35 00:31:58.532 Starting 1 thread 00:32:01.061 00:32:01.061 test: (groupid=0, jobs=1): err= 0: pid=357271: Mon Nov 18 20:33:12 2024 00:32:01.061 read: IOPS=5986, BW=23.4MiB/s (24.5MB/s)(46.9MiB/2007msec) 00:32:01.061 slat (nsec): min=1875, max=167522, avg=2538.52, stdev=2212.89 00:32:01.061 clat (usec): min=1045, max=171224, avg=11613.05, stdev=11651.81 00:32:01.061 lat (usec): min=1048, max=171269, avg=11615.59, stdev=11652.19 00:32:01.061 clat percentiles 
(msec): 00:32:01.061 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:32:01.061 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:32:01.061 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:32:01.061 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:32:01.061 | 99.99th=[ 171] 00:32:01.061 bw ( KiB/s): min=16592, max=26392, per=99.67%, avg=23866.00, stdev=4851.13, samples=4 00:32:01.061 iops : min= 4148, max= 6598, avg=5966.50, stdev=1212.78, samples=4 00:32:01.061 write: IOPS=5968, BW=23.3MiB/s (24.4MB/s)(46.8MiB/2007msec); 0 zone resets 00:32:01.061 slat (usec): min=2, max=139, avg= 2.64, stdev= 1.61 00:32:01.061 clat (usec): min=276, max=168959, avg=9596.33, stdev=10925.47 00:32:01.061 lat (usec): min=279, max=168966, avg=9598.97, stdev=10925.83 00:32:01.061 clat percentiles (msec): 00:32:01.061 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:32:01.061 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 10], 00:32:01.061 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 11], 00:32:01.061 | 99.00th=[ 11], 99.50th=[ 15], 99.90th=[ 169], 99.95th=[ 169], 00:32:01.061 | 99.99th=[ 169] 00:32:01.061 bw ( KiB/s): min=17600, max=26048, per=99.97%, avg=23866.00, stdev=4179.42, samples=4 00:32:01.061 iops : min= 4400, max= 6512, avg=5966.50, stdev=1044.85, samples=4 00:32:01.061 lat (usec) : 500=0.01%, 750=0.01% 00:32:01.061 lat (msec) : 2=0.03%, 4=0.13%, 10=56.94%, 20=42.35%, 250=0.53% 00:32:01.061 cpu : usr=64.86%, sys=33.80%, ctx=94, majf=0, minf=41 00:32:01.061 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:32:01.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:01.061 issued rwts: total=12015,11978,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:01.061 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:01.061 00:32:01.061 Run status 
group 0 (all jobs): 00:32:01.061 READ: bw=23.4MiB/s (24.5MB/s), 23.4MiB/s-23.4MiB/s (24.5MB/s-24.5MB/s), io=46.9MiB (49.2MB), run=2007-2007msec 00:32:01.061 WRITE: bw=23.3MiB/s (24.4MB/s), 23.3MiB/s-23.3MiB/s (24.4MB/s-24.4MB/s), io=46.8MiB (49.1MB), run=2007-2007msec 00:32:01.061 20:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:01.061 20:33:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:02.438 20:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=cfcc4a31-5e6e-429b-b1ea-e975561a2fb2 00:32:02.438 20:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb cfcc4a31-5e6e-429b-b1ea-e975561a2fb2 00:32:02.438 20:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=cfcc4a31-5e6e-429b-b1ea-e975561a2fb2 00:32:02.438 20:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:02.438 20:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:32:02.438 20:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:02.438 20:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:02.696 20:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:02.696 { 00:32:02.696 "uuid": "886c79d9-1754-493f-a70f-25b18748c4c9", 00:32:02.696 "name": "lvs_0", 00:32:02.696 "base_bdev": "Nvme0n1", 00:32:02.696 "total_data_clusters": 930, 00:32:02.696 "free_clusters": 0, 00:32:02.696 "block_size": 512, 00:32:02.696 "cluster_size": 1073741824 00:32:02.696 }, 00:32:02.696 { 
00:32:02.696 "uuid": "cfcc4a31-5e6e-429b-b1ea-e975561a2fb2", 00:32:02.696 "name": "lvs_n_0", 00:32:02.696 "base_bdev": "6039d277-943c-42ad-8c32-833fe36e21d0", 00:32:02.696 "total_data_clusters": 237847, 00:32:02.696 "free_clusters": 237847, 00:32:02.696 "block_size": 512, 00:32:02.696 "cluster_size": 4194304 00:32:02.696 } 00:32:02.696 ]' 00:32:02.696 20:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="cfcc4a31-5e6e-429b-b1ea-e975561a2fb2") .free_clusters' 00:32:02.696 20:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:32:02.696 20:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="cfcc4a31-5e6e-429b-b1ea-e975561a2fb2") .cluster_size' 00:32:02.696 20:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:32:02.696 20:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:32:02.696 20:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:32:02.696 951388 00:32:02.696 20:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:32:03.264 e0a40fdf-6d22-44a9-980a-8b4e2e5037a8 00:32:03.264 20:33:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:32:03.830 20:33:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:32:03.830 20:33:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t 
tcp -a 10.0.0.2 -s 4420 00:32:04.398 20:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:04.398 20:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:04.398 20:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:04.398 20:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:04.398 20:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:04.399 20:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:04.399 20:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:04.399 20:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:04.399 20:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:04.399 20:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:04.399 20:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:04.399 20:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:04.399 20:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:04.399 
20:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:04.399 20:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:04.399 20:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:04.399 20:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:04.399 20:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:04.399 20:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:04.399 20:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:04.399 20:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:04.399 20:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:04.399 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:04.399 fio-3.35 00:32:04.399 Starting 1 thread 00:32:06.928 00:32:06.928 test: (groupid=0, jobs=1): err= 0: pid=358008: Mon Nov 18 20:33:18 2024 00:32:06.928 read: IOPS=5740, BW=22.4MiB/s (23.5MB/s)(45.1MiB/2010msec) 00:32:06.928 slat (usec): min=2, max=149, avg= 2.62, stdev= 2.23 00:32:06.928 clat (usec): min=4531, max=20228, avg=12105.03, stdev=1121.17 00:32:06.928 lat (usec): min=4543, max=20230, avg=12107.65, stdev=1121.10 00:32:06.928 clat percentiles (usec): 00:32:06.928 | 1.00th=[ 9634], 5.00th=[10421], 10.00th=[10683], 20.00th=[11207], 00:32:06.928 | 30.00th=[11469], 
40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:32:06.928 | 70.00th=[12649], 80.00th=[13042], 90.00th=[13435], 95.00th=[13960], 00:32:06.928 | 99.00th=[14484], 99.50th=[14746], 99.90th=[17695], 99.95th=[19006], 00:32:06.928 | 99.99th=[20317] 00:32:06.928 bw ( KiB/s): min=21592, max=23568, per=99.96%, avg=22954.00, stdev=924.30, samples=4 00:32:06.928 iops : min= 5398, max= 5892, avg=5738.50, stdev=231.08, samples=4 00:32:06.928 write: IOPS=5730, BW=22.4MiB/s (23.5MB/s)(45.0MiB/2010msec); 0 zone resets 00:32:06.928 slat (usec): min=2, max=136, avg= 2.74, stdev= 1.76 00:32:06.928 clat (usec): min=2215, max=19945, avg=10000.70, stdev=958.95 00:32:06.928 lat (usec): min=2221, max=19948, avg=10003.45, stdev=958.89 00:32:06.928 clat percentiles (usec): 00:32:06.928 | 1.00th=[ 7963], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9241], 00:32:06.928 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:32:06.928 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11076], 95.00th=[11338], 00:32:06.928 | 99.00th=[11994], 99.50th=[12387], 99.90th=[17695], 99.95th=[19268], 00:32:06.928 | 99.99th=[19792] 00:32:06.928 bw ( KiB/s): min=22616, max=23168, per=99.93%, avg=22908.00, stdev=234.47, samples=4 00:32:06.928 iops : min= 5654, max= 5792, avg=5727.00, stdev=58.62, samples=4 00:32:06.928 lat (msec) : 4=0.05%, 10=26.32%, 20=73.62%, 50=0.01% 00:32:06.928 cpu : usr=65.21%, sys=33.50%, ctx=102, majf=0, minf=41 00:32:06.928 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:32:06.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.928 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:06.928 issued rwts: total=11539,11519,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.928 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:06.928 00:32:06.928 Run status group 0 (all jobs): 00:32:06.928 READ: bw=22.4MiB/s (23.5MB/s), 22.4MiB/s-22.4MiB/s (23.5MB/s-23.5MB/s), io=45.1MiB 
(47.3MB), run=2010-2010msec 00:32:06.928 WRITE: bw=22.4MiB/s (23.5MB/s), 22.4MiB/s-22.4MiB/s (23.5MB/s-23.5MB/s), io=45.0MiB (47.2MB), run=2010-2010msec 00:32:06.928 20:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:07.186 20:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:32:07.186 20:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:11.376 20:33:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:11.376 20:33:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:14.668 20:33:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:14.668 20:33:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 
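The lvstore sizing traced earlier in this run derives free_mb from the free_clusters (fc=237847) and cluster_size (cs=4194304) values that jq extracted, then passes it to bdev_lvol_create. A minimal sketch of that arithmetic, using the values reported in this run (variable names mirror the trace; the exact helper in autotest_common.sh may differ):

```shell
# Values reported by bdev_lvol_get_lvstores for lvs_n_0 in this run
fc=237847      # free_clusters
cs=4194304     # cluster_size in bytes (4 MiB)

# Convert free clusters to MiB: clusters * bytes-per-cluster / bytes-per-MiB
free_mb=$(( fc * cs / 1024 / 1024 ))
echo "$free_mb"   # the trace echoes 951388
```

237847 clusters at 4 MiB each gives 951388 MiB, matching the size the trace passes to `bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388`.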
00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:16.568 rmmod nvme_tcp 00:32:16.568 rmmod nvme_fabrics 00:32:16.568 rmmod nvme_keyring 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 354558 ']' 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 354558 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 354558 ']' 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 354558 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 354558 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 354558' 00:32:16.568 killing process with pid 354558 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 354558 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 354558 00:32:16.568 20:33:28 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:16.568 20:33:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:19.106 00:32:19.106 real 0m38.055s 00:32:19.106 user 2m26.018s 00:32:19.106 sys 0m6.937s 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.106 ************************************ 00:32:19.106 END TEST nvmf_fio_host 00:32:19.106 ************************************ 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:19.106 
20:33:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.106 ************************************ 00:32:19.106 START TEST nvmf_failover 00:32:19.106 ************************************ 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:19.106 * Looking for test storage... 00:32:19.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:32:19.106 20:33:30 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:32:19.106 20:33:30 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:19.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.106 --rc genhtml_branch_coverage=1 00:32:19.106 --rc genhtml_function_coverage=1 00:32:19.106 --rc genhtml_legend=1 00:32:19.106 --rc geninfo_all_blocks=1 00:32:19.106 --rc geninfo_unexecuted_blocks=1 00:32:19.106 00:32:19.106 ' 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:19.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.106 --rc genhtml_branch_coverage=1 00:32:19.106 --rc genhtml_function_coverage=1 00:32:19.106 --rc genhtml_legend=1 00:32:19.106 --rc geninfo_all_blocks=1 00:32:19.106 --rc geninfo_unexecuted_blocks=1 00:32:19.106 00:32:19.106 ' 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:19.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.106 --rc genhtml_branch_coverage=1 00:32:19.106 --rc genhtml_function_coverage=1 00:32:19.106 --rc genhtml_legend=1 00:32:19.106 --rc geninfo_all_blocks=1 00:32:19.106 --rc geninfo_unexecuted_blocks=1 00:32:19.106 00:32:19.106 ' 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:19.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.106 --rc genhtml_branch_coverage=1 00:32:19.106 --rc genhtml_function_coverage=1 00:32:19.106 --rc genhtml_legend=1 00:32:19.106 --rc geninfo_all_blocks=1 00:32:19.106 --rc geninfo_unexecuted_blocks=1 00:32:19.106 00:32:19.106 ' 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:19.106 20:33:30 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:19.106 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:19.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:32:19.107 20:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:21.013 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:21.014 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:21.014 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:21.014 20:33:32 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:21.014 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:21.014 20:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:21.014 Found net devices 
under 0000:0a:00.1: cvl_0_1 00:32:21.014 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:21.014 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:21.014 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:32:21.014 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:21.014 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:21.014 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:21.014 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:21.014 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:21.014 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:21.014 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:21.014 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:21.014 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:21.014 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:21.014 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:21.014 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:21.014 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:21.014 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:21.014 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:21.014 
20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:21.014 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:21.014 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:21.273 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:21.273 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:21.273 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:21.273 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:21.273 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:21.273 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:21.273 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:21.273 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:21.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:21.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:32:21.273 00:32:21.273 --- 10.0.0.2 ping statistics --- 00:32:21.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:21.273 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:32:21.273 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:21.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:21.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:32:21.273 00:32:21.273 --- 10.0.0.1 ping statistics --- 00:32:21.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:21.273 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:32:21.273 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:21.273 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:32:21.273 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:21.273 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:21.273 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:21.273 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:21.273 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:21.274 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:21.274 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:21.274 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:21.274 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:21.274 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:21.274 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:21.274 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=361380 00:32:21.274 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:21.274 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 361380 00:32:21.274 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 361380 ']' 00:32:21.274 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:21.274 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:21.274 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:21.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:21.274 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:21.274 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:21.274 [2024-11-18 20:33:33.202539] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:32:21.274 [2024-11-18 20:33:33.202616] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:21.274 [2024-11-18 20:33:33.273694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:21.532 [2024-11-18 20:33:33.319313] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:21.532 [2024-11-18 20:33:33.319367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:21.532 [2024-11-18 20:33:33.319395] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:21.532 [2024-11-18 20:33:33.319406] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:32:21.532 [2024-11-18 20:33:33.319416] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:21.532 [2024-11-18 20:33:33.320876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:21.532 [2024-11-18 20:33:33.320942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:21.532 [2024-11-18 20:33:33.320946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:21.532 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:21.532 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:21.532 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:21.532 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:21.532 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:21.532 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:21.532 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:21.790 [2024-11-18 20:33:33.707024] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:21.790 20:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:22.047 Malloc0 00:32:22.047 20:33:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:22.305 20:33:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:22.870 20:33:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:23.127 [2024-11-18 20:33:34.925491] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:23.127 20:33:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:23.385 [2024-11-18 20:33:35.222378] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:23.385 20:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:23.643 [2024-11-18 20:33:35.487354] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:23.643 20:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=361667 00:32:23.643 20:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:23.643 20:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:23.643 20:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 361667 /var/tmp/bdevperf.sock 00:32:23.643 20:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- 
# '[' -z 361667 ']' 00:32:23.643 20:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:23.643 20:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:23.643 20:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:23.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:23.643 20:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:23.643 20:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:23.901 20:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:23.901 20:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:23.901 20:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:24.470 NVMe0n1 00:32:24.470 20:33:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:24.730 00:32:24.730 20:33:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=361799 00:32:24.730 20:33:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:24.730 20:33:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
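The two bdev_nvme_attach_controller calls above can be sketched as follows. Both attach the same subsystem under one controller name (NVMe0) with `-x failover`, so bdevperf sees a single NVMe0n1 bdev with two paths (ports 4420 and 4421) rather than two bdevs. This is a hedged illustration: `RPC` is set to echo here, whereas the real run invokes `scripts/rpc.py -s /var/tmp/bdevperf.sock`.

```shell
# Dry-run sketch of the multipath attach sequence; RPC echoes instead of
# calling the real rpc.py, so nothing is actually attached.
RPC="echo rpc.py -s /var/tmp/bdevperf.sock"
NQN="nqn.2016-06.io.spdk:cnode1"

attach_path() {
    # $1 = TCP service (port) of the listener used for this path
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
        -s "$1" -f ipv4 -n "$NQN" -x failover
}

attach_path 4420   # primary path
attach_path 4421   # alternate path, taken once the 4420 listener is removed
```

Removing and re-adding listeners on ports 4420/4421/4422 then exercises failover between these paths while the bdevperf workload runs.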
00:32:25.666 20:33:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:25.924 [2024-11-18 20:33:37.846465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747060 is same with the state(6) to be set 00:32:25.924 [identical recv-state messages for tqpair=0x747060 repeated through 20:33:37.846762; trimmed] 00:32:25.924 20:33:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:32:29.221 20:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:29.221 00:32:29.221 20:33:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:29.787 [2024-11-18 20:33:41.530019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7485a0 is same with the state(6) to be set 00:32:29.787 [identical recv-state messages for tqpair=0x7485a0 repeated through 20:33:41.530958; trimmed] 00:32:29.788 20:33:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:32:33.074 20:33:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:33.074 [2024-11-18 20:33:44.792951] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:33.074 20:33:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:32:34.013 20:33:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:34.271 [2024-11-18 20:33:46.075600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7494d0 is same with the state(6) to be set 00:32:34.271 [identical recv-state messages for tqpair=0x7494d0 repeated through 20:33:46.075872; trimmed] 00:32:34.271 20:33:46
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 361799 00:32:40.840 { 00:32:40.841 "results": [ 00:32:40.841 { 00:32:40.841 "job": "NVMe0n1", 00:32:40.841 "core_mask": "0x1", 00:32:40.841 "workload": "verify", 00:32:40.841 "status": "finished", 00:32:40.841 "verify_range": { 00:32:40.841 "start": 0, 00:32:40.841 "length": 16384 00:32:40.841 }, 00:32:40.841 "queue_depth": 128, 00:32:40.841 "io_size": 4096, 00:32:40.841 "runtime": 15.013143, 00:32:40.841 "iops": 8388.450040074886, 00:32:40.841 "mibps": 32.76738296904252, 00:32:40.841 "io_failed": 8965, 00:32:40.841 "io_timeout": 0, 00:32:40.841 "avg_latency_us": 14217.90735604502, 00:32:40.841 "min_latency_us": 546.1333333333333, 00:32:40.841 "max_latency_us": 18350.08 00:32:40.841 } 00:32:40.841 ], 00:32:40.841 "core_count": 1 00:32:40.841 } 00:32:40.841 20:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 361667 00:32:40.841 20:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 361667 ']' 00:32:40.841 20:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 361667 00:32:40.841 20:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:40.841 20:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:40.841 20:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 361667 00:32:40.841 20:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:40.841 20:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:40.841 20:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 361667' 00:32:40.841 killing process with pid 361667 00:32:40.841 20:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 361667 
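The JSON summary above is internally consistent, which can be cross-checked with a one-liner: bandwidth in MiB/s should equal iops × io_size / 2^20. The numbers below are copied from the results block (io_size 4096, iops 8388.45…), and the computed value matches the reported mibps of 32.767.

```shell
# Cross-check the bdevperf summary: MiB/s = iops * io_size / 2^20.
iops=8388.450040074886
io_size=4096
awk -v iops="$iops" -v sz="$io_size" \
    'BEGIN { printf "%.2f MiB/s\n", iops * sz / 1048576 }'
# prints 32.77 MiB/s, matching the reported "mibps": 32.767...
```

The nonzero io_failed count (8965) corresponds to I/Os aborted while listeners were being removed during the three failover transitions; verify still finishes, so the paths recovered each time.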
00:32:40.841 20:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 361667
00:32:40.841 20:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:32:40.841 [2024-11-18 20:33:35.552674] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization...
00:32:40.841 [2024-11-18 20:33:35.552757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid361667 ]
00:32:40.841 [2024-11-18 20:33:35.620563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:40.841 [2024-11-18 20:33:35.668747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:40.841 Running I/O for 15 seconds...
00:32:40.841 8584.00 IOPS, 33.53 MiB/s [2024-11-18T19:33:52.849Z]
[2024-11-18 20:33:37.847185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:40.841 [2024-11-18 20:33:37.847226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_qpair WRITE command / "ABORTED - SQ DELETION (00/08)" completion notice pair repeats for lba 78552 through 78736, one pair per 8-block write ...]
00:32:40.841 [2024-11-18 20:33:37.848033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:40.841 [2024-11-18 20:33:37.848047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_qpair READ command / "ABORTED - SQ DELETION (00/08)" completion notice pair repeats for lba 77856 through 78520 ...]
00:32:40.844 [2024-11-18 20:33:37.850536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:40.844 [2024-11-18 20:33:37.850550] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.844 [2024-11-18 20:33:37.850564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.844 [2024-11-18 20:33:37.850589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.844 [2024-11-18 20:33:37.850604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.844 [2024-11-18 20:33:37.850618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.844 [2024-11-18 20:33:37.850632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.844 [2024-11-18 20:33:37.850672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.844 [2024-11-18 20:33:37.850689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.844 [2024-11-18 20:33:37.850703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.844 [2024-11-18 20:33:37.850723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.844 [2024-11-18 20:33:37.850737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.844 [2024-11-18 20:33:37.850752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 
lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.844 [2024-11-18 20:33:37.850766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.844 [2024-11-18 20:33:37.850780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.844 [2024-11-18 20:33:37.850794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.844 [2024-11-18 20:33:37.850809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.844 [2024-11-18 20:33:37.850822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.844 [2024-11-18 20:33:37.850837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.844 [2024-11-18 20:33:37.850851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.844 [2024-11-18 20:33:37.850870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.844 [2024-11-18 20:33:37.850885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.844 [2024-11-18 20:33:37.850899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.844 [2024-11-18 20:33:37.850913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.844 [2024-11-18 
20:33:37.850928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.844 [2024-11-18 20:33:37.850942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.844 [2024-11-18 20:33:37.850957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.844 [2024-11-18 20:33:37.850971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.844 [2024-11-18 20:33:37.850986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.844 [2024-11-18 20:33:37.851003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.844 [2024-11-18 20:33:37.851019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.844 [2024-11-18 20:33:37.851033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.844 [2024-11-18 20:33:37.851048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.844 [2024-11-18 20:33:37.851062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.844 [2024-11-18 20:33:37.851076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x55b340 is same with the state(6) to be set 00:32:40.844 [2024-11-18 20:33:37.851093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:32:40.844 [2024-11-18 20:33:37.851105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:40.844 [2024-11-18 20:33:37.851116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78864 len:8 PRP1 0x0 PRP2 0x0 00:32:40.844 [2024-11-18 20:33:37.851129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.844 [2024-11-18 20:33:37.851200] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:40.844 [2024-11-18 20:33:37.851239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:40.844 [2024-11-18 20:33:37.851257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.844 [2024-11-18 20:33:37.851278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:40.844 [2024-11-18 20:33:37.851292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.844 [2024-11-18 20:33:37.851306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:40.844 [2024-11-18 20:33:37.851319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.844 [2024-11-18 20:33:37.851333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:40.844 [2024-11-18 20:33:37.851346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.844 [2024-11-18 20:33:37.851360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:32:40.844 [2024-11-18 20:33:37.851404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x53e3b0 (9): Bad file descriptor 00:32:40.844 [2024-11-18 20:33:37.854648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:40.844 [2024-11-18 20:33:37.884122] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:32:40.844 8425.50 IOPS, 32.91 MiB/s [2024-11-18T19:33:52.852Z] 8539.00 IOPS, 33.36 MiB/s [2024-11-18T19:33:52.852Z] 8529.25 IOPS, 33.32 MiB/s [2024-11-18T19:33:52.852Z] [2024-11-18 20:33:41.530004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:40.844 [2024-11-18 20:33:41.530080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.844 [2024-11-18 20:33:41.530099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:40.844 [2024-11-18 20:33:41.530130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.844 [2024-11-18 20:33:41.530146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:40.844 [2024-11-18 20:33:41.530160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.844 [2024-11-18 20:33:41.530174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 
nsid:0 cdw10:00000000 cdw11:00000000 00:32:40.844 [2024-11-18 20:33:41.530188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.844 [2024-11-18 20:33:41.530201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e3b0 is same with the state(6) to be set 00:32:40.844 [2024-11-18 20:33:41.532759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.844 [2024-11-18 20:33:41.532786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.844 [2024-11-18 20:33:41.532813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.844 [2024-11-18 20:33:41.532829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.844 [2024-11-18 20:33:41.532845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.844 [2024-11-18 20:33:41.532859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.844 [2024-11-18 20:33:41.532874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.844 [2024-11-18 20:33:41.532888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.532903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.845 [2024-11-18 
20:33:41.532917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.532932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.845 [2024-11-18 20:33:41.532945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.532960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.845 [2024-11-18 20:33:41.532974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.532988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.845 [2024-11-18 20:33:41.533002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.533016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.845 [2024-11-18 20:33:41.533030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.533044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.845 [2024-11-18 20:33:41.533064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.533079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:60 nsid:1 lba:77496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.845 [2024-11-18 20:33:41.533093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.533108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.845 [2024-11-18 20:33:41.533122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.533137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.845 [2024-11-18 20:33:41.533150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.533165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.845 [2024-11-18 20:33:41.533178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.533194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.845 [2024-11-18 20:33:41.533207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.533222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.845 [2024-11-18 20:33:41.533236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.533250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.845 [2024-11-18 20:33:41.533265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.533279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.845 [2024-11-18 20:33:41.533292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.533308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.845 [2024-11-18 20:33:41.533321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.533336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:77568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.845 [2024-11-18 20:33:41.533350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.533364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.845 [2024-11-18 20:33:41.533379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.533394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.845 [2024-11-18 20:33:41.533408] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.533427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.845 [2024-11-18 20:33:41.533441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.533456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.845 [2024-11-18 20:33:41.533470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.533485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.845 [2024-11-18 20:33:41.533498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.533513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.845 [2024-11-18 20:33:41.533527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.533541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.845 [2024-11-18 20:33:41.533555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.533570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77632 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.845 [2024-11-18 20:33:41.533584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.533598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.845 [2024-11-18 20:33:41.533612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.533626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.845 [2024-11-18 20:33:41.533649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.533666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.845 [2024-11-18 20:33:41.533679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.533694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.845 [2024-11-18 20:33:41.533708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.533722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.845 [2024-11-18 20:33:41.533736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 
20:33:41.533751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.845 [2024-11-18 20:33:41.533764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.533779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.845 [2024-11-18 20:33:41.533794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.845 [2024-11-18 20:33:41.533813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.845 [2024-11-18 20:33:41.533827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.533842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.533856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.533870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.533884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.533899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.533913] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.533928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.533941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.533956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.533970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.533985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.533999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534246] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 
[2024-11-18 20:33:41.534579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534744] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.846 [2024-11-18 20:33:41.534964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.846 [2024-11-18 20:33:41.534979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.847 [2024-11-18 20:33:41.534992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.847 [2024-11-18 20:33:41.535020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.847 [2024-11-18 20:33:41.535048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.847 [2024-11-18 20:33:41.535076] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.847 [2024-11-18 20:33:41.535105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.847 [2024-11-18 20:33:41.535133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.847 [2024-11-18 20:33:41.535162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.847 [2024-11-18 20:33:41.535191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.847 [2024-11-18 20:33:41.535219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 
nsid:1 lba:78088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.847 [2024-11-18 20:33:41.535248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.847 [2024-11-18 20:33:41.535280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.847 [2024-11-18 20:33:41.535308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.847 [2024-11-18 20:33:41.535336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.847 [2024-11-18 20:33:41.535364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.847 [2024-11-18 20:33:41.535392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 
[2024-11-18 20:33:41.535407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.847 [2024-11-18 20:33:41.535420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.847 [2024-11-18 20:33:41.535448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.847 [2024-11-18 20:33:41.535476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.847 [2024-11-18 20:33:41.535504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.847 [2024-11-18 20:33:41.535532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.847 [2024-11-18 20:33:41.535560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.847 [2024-11-18 20:33:41.535587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.847 [2024-11-18 20:33:41.535616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.847 [2024-11-18 20:33:41.535657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.847 [2024-11-18 20:33:41.535685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:78216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.847 [2024-11-18 20:33:41.535714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 
lba:78224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.847 [2024-11-18 20:33:41.535741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.847 [2024-11-18 20:33:41.535769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.847 [2024-11-18 20:33:41.535797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.847 [2024-11-18 20:33:41.535825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:40.847 [2024-11-18 20:33:41.535872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78256 len:8 PRP1 0x0 PRP2 0x0 00:32:40.847 [2024-11-18 20:33:41.535885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:40.847 [2024-11-18 20:33:41.535916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:32:40.847 [2024-11-18 20:33:41.535928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78264 len:8 PRP1 0x0 PRP2 0x0 00:32:40.847 [2024-11-18 20:33:41.535941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:40.847 [2024-11-18 20:33:41.535964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:40.847 [2024-11-18 20:33:41.535974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78272 len:8 PRP1 0x0 PRP2 0x0 00:32:40.847 [2024-11-18 20:33:41.535987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.535999] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:40.847 [2024-11-18 20:33:41.536010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:40.847 [2024-11-18 20:33:41.536021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78280 len:8 PRP1 0x0 PRP2 0x0 00:32:40.847 [2024-11-18 20:33:41.536037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.536051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:40.847 [2024-11-18 20:33:41.536062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:40.847 [2024-11-18 20:33:41.536072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78288 len:8 PRP1 0x0 PRP2 0x0 00:32:40.847 [2024-11-18 20:33:41.536085] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.536097] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:40.847 [2024-11-18 20:33:41.536108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:40.847 [2024-11-18 20:33:41.536118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78296 len:8 PRP1 0x0 PRP2 0x0 00:32:40.847 [2024-11-18 20:33:41.536131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.847 [2024-11-18 20:33:41.536143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:40.848 [2024-11-18 20:33:41.536154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:40.848 [2024-11-18 20:33:41.536165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78304 len:8 PRP1 0x0 PRP2 0x0 00:32:40.848 [2024-11-18 20:33:41.536177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.848 [2024-11-18 20:33:41.536189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:40.848 [2024-11-18 20:33:41.536200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:40.848 [2024-11-18 20:33:41.536210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78312 len:8 PRP1 0x0 PRP2 0x0 00:32:40.848 [2024-11-18 20:33:41.536223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.848 [2024-11-18 20:33:41.536235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:40.848 [2024-11-18 20:33:41.536245] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:40.848 [2024-11-18 20:33:41.536256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78320 len:8 PRP1 0x0 PRP2 0x0 00:32:40.848 [2024-11-18 20:33:41.536268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.848 [2024-11-18 20:33:41.536281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:40.848 [2024-11-18 20:33:41.536291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:40.848 [2024-11-18 20:33:41.536302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78328 len:8 PRP1 0x0 PRP2 0x0 00:32:40.848 [2024-11-18 20:33:41.536314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.848 [2024-11-18 20:33:41.536327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:40.848 [2024-11-18 20:33:41.536337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:40.848 [2024-11-18 20:33:41.536348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78336 len:8 PRP1 0x0 PRP2 0x0 00:32:40.848 [2024-11-18 20:33:41.536360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.848 [2024-11-18 20:33:41.536373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:40.848 [2024-11-18 20:33:41.536386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:40.848 [2024-11-18 20:33:41.536398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78344 len:8 PRP1 0x0 PRP2 0x0 
00:32:40.848 [2024-11-18 20:33:41.536410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.848 [2024-11-18 20:33:41.536423] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:40.848 [2024-11-18 20:33:41.536434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:40.848 [2024-11-18 20:33:41.536444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78352 len:8 PRP1 0x0 PRP2 0x0 00:32:40.848 [2024-11-18 20:33:41.536456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.848 [2024-11-18 20:33:41.536469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:40.848 [2024-11-18 20:33:41.536480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:40.848 [2024-11-18 20:33:41.536490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78360 len:8 PRP1 0x0 PRP2 0x0 00:32:40.848 [2024-11-18 20:33:41.536503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.848 [2024-11-18 20:33:41.536515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:40.848 [2024-11-18 20:33:41.536526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:40.848 [2024-11-18 20:33:41.536537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78368 len:8 PRP1 0x0 PRP2 0x0 00:32:40.848 [2024-11-18 20:33:41.536549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.848 [2024-11-18 20:33:41.536562] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:40.848 [2024-11-18 20:33:41.536572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:40.848 [2024-11-18 20:33:41.536583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78376 len:8 PRP1 0x0 PRP2 0x0 00:32:40.848 [2024-11-18 20:33:41.536595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.848 [2024-11-18 20:33:41.536607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:40.848 [2024-11-18 20:33:41.536618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:40.848 [2024-11-18 20:33:41.536628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78384 len:8 PRP1 0x0 PRP2 0x0 00:32:40.848 [2024-11-18 20:33:41.536650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.848 [2024-11-18 20:33:41.536664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:40.848 [2024-11-18 20:33:41.536675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:40.848 [2024-11-18 20:33:41.536685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78392 len:8 PRP1 0x0 PRP2 0x0 00:32:40.848 [2024-11-18 20:33:41.536698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.848 [2024-11-18 20:33:41.536710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:40.848 [2024-11-18 20:33:41.536721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:40.848 [2024-11-18 20:33:41.536732] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78400 len:8 PRP1 0x0 PRP2 0x0 00:32:40.848 [2024-11-18 20:33:41.536744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.848 [2024-11-18 20:33:41.536761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:40.848 [2024-11-18 20:33:41.536772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:40.848 [2024-11-18 20:33:41.536783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78408 len:8 PRP1 0x0 PRP2 0x0 00:32:40.848 [2024-11-18 20:33:41.536795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.848 [2024-11-18 20:33:41.536808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:40.848 [2024-11-18 20:33:41.536818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:40.848 [2024-11-18 20:33:41.536829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78416 len:8 PRP1 0x0 PRP2 0x0 00:32:40.848 [2024-11-18 20:33:41.536842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.848 [2024-11-18 20:33:41.536854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:40.848 [2024-11-18 20:33:41.536865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:40.848 [2024-11-18 20:33:41.536876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78424 len:8 PRP1 0x0 PRP2 0x0 00:32:40.848 [2024-11-18 20:33:41.536888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:40.848 [2024-11-18 20:33:41.536908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:40.848 [2024-11-18 20:33:41.536919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:40.848 [2024-11-18 20:33:41.536930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78432 len:8 PRP1 0x0 PRP2 0x0
00:32:40.848 [2024-11-18 20:33:41.536943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:40.848 [2024-11-18 20:33:41.537012] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:32:40.848 [2024-11-18 20:33:41.537032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:32:40.848 [2024-11-18 20:33:41.540261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:32:40.848 [2024-11-18 20:33:41.540302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x53e3b0 (9): Bad file descriptor
00:32:40.848 [2024-11-18 20:33:41.563470] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:32:40.848 8463.20 IOPS, 33.06 MiB/s [2024-11-18T19:33:52.856Z] 8450.33 IOPS, 33.01 MiB/s [2024-11-18T19:33:52.856Z] 8434.71 IOPS, 32.95 MiB/s [2024-11-18T19:33:52.856Z] 8402.12 IOPS, 32.82 MiB/s [2024-11-18T19:33:52.856Z] 8384.33 IOPS, 32.75 MiB/s [2024-11-18T19:33:52.856Z] [2024-11-18 20:33:46.076300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:122720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:40.848 [2024-11-18 20:33:46.076340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:40.848 [2024-11-18 20:33:46.076366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:122728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:40.848 [2024-11-18 20:33:46.076381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:40.848 [2024-11-18 20:33:46.076397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:122736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:40.848 [2024-11-18 20:33:46.076412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:40.848 [2024-11-18 20:33:46.076427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:122744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:40.848 [2024-11-18 20:33:46.076450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:40.848 [2024-11-18 20:33:46.076466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:122752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:40.848 [2024-11-18 20:33:46.076479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:32:40.848 [2024-11-18 20:33:46.076494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:122760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.848 [2024-11-18 20:33:46.076508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.076522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:122768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.849 [2024-11-18 20:33:46.076535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.076550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:122776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.849 [2024-11-18 20:33:46.076563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.076577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:122400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 20:33:46.076591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.076605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 20:33:46.076634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.076660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:122416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 
20:33:46.076674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.076690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:122424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 20:33:46.076704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.076719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:122432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 20:33:46.076732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.076748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:122440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 20:33:46.076761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.076776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:122448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 20:33:46.076790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.076805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:122456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 20:33:46.076819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.076838] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:122464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 20:33:46.076853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.076868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 20:33:46.076881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.076896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:122480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 20:33:46.076911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.076926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:122488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 20:33:46.076954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.076970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:122496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 20:33:46.076984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.076999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:122504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 20:33:46.077012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.077027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:122512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 20:33:46.077040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.077054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:122520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 20:33:46.077068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.077082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:122528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 20:33:46.077095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.077109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:122536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 20:33:46.077122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.077137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:122544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 20:33:46.077150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.077164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:122552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 
20:33:46.077177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.077192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:122560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 20:33:46.077209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.077224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:122568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 20:33:46.077237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.077251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:122576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 20:33:46.077264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.077279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:122584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 20:33:46.077292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.077306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:122592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 20:33:46.077320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.077334] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:16 nsid:1 lba:122600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 20:33:46.077347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.077362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:122608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 20:33:46.077376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.077390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:122616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 20:33:46.077404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.077418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:122624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 20:33:46.077432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.077446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:122632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 20:33:46.077459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.077474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:122640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 20:33:46.077487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.077501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:122648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.849 [2024-11-18 20:33:46.077515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.077529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:122784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.849 [2024-11-18 20:33:46.077542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.849 [2024-11-18 20:33:46.077560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:122792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.077574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.077589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:122800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.077603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.077633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:122808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.077656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.077671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:122816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 
20:33:46.077686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.077701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:122824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.077715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.077730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:122832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.077743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.077758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:122840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.077772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.077787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:122848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.077801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.077816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:122856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.077829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.077844] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:122864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.077858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.077873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:122872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.077887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.077902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:122880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.077916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.077930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:122888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.077963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.077979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:122896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.077992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.078007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:122904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.078021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.078035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:122912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.078048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.078063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:122920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.078076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.078091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:122928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.078104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.078118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:122936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.078132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.078146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:122944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.078159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.078174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:122952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 
20:33:46.078187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.078201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:122960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.078215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.078229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:122968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.078243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.078257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:122976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.078271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.078285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:122984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.078298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.078313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:122992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.078330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.078345] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:123000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.078358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.078373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:123008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.078386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.078400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:123016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.078413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.078427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:123024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.078441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.078455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:123032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.078469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.078484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:123040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.078497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.078511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:123048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.078525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.078539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:123056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.078552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.850 [2024-11-18 20:33:46.078567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:123064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.850 [2024-11-18 20:33:46.078580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.078594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:123072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.851 [2024-11-18 20:33:46.078607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.078621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:123080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.851 [2024-11-18 20:33:46.078635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.078677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:123088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.851 [2024-11-18 
20:33:46.078691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.078710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:123096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.851 [2024-11-18 20:33:46.078724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.078739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.851 [2024-11-18 20:33:46.078753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.078767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:123112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.851 [2024-11-18 20:33:46.078781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.078795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:123120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.851 [2024-11-18 20:33:46.078809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.078824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:123128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.851 [2024-11-18 20:33:46.078837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.078852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:82 nsid:1 lba:123136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.851 [2024-11-18 20:33:46.078866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.078880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:123144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.851 [2024-11-18 20:33:46.078893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.078908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:123152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.851 [2024-11-18 20:33:46.078921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.078936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:123160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.851 [2024-11-18 20:33:46.078949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.078964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:123168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.851 [2024-11-18 20:33:46.078977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.078992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:123176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.851 [2024-11-18 20:33:46.079006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.079020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:123184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.851 [2024-11-18 20:33:46.079034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.079048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:123192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.851 [2024-11-18 20:33:46.079065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.079080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:123200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.851 [2024-11-18 20:33:46.079094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.079108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:123208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.851 [2024-11-18 20:33:46.079122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.079136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:123216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.851 [2024-11-18 20:33:46.079150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.079165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:123224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.851 [2024-11-18 20:33:46.079179] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.079194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:123232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.851 [2024-11-18 20:33:46.079207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.079222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:123240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.851 [2024-11-18 20:33:46.079236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.079250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:123248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.851 [2024-11-18 20:33:46.079264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.079279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:123256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.851 [2024-11-18 20:33:46.079293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.079307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:123264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.851 [2024-11-18 20:33:46.079321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.079336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:59 nsid:1 lba:123272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.851 [2024-11-18 20:33:46.079349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.079364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:123280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.851 [2024-11-18 20:33:46.079378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.079393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:123288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.851 [2024-11-18 20:33:46.079406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.079424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:122656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.851 [2024-11-18 20:33:46.079438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.079454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:122664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.851 [2024-11-18 20:33:46.079468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.079482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:122672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.851 [2024-11-18 20:33:46.079496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:40.851 [2024-11-18 20:33:46.079511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:122680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.851 [2024-11-18 20:33:46.079525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.079540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:122688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.851 [2024-11-18 20:33:46.079554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.079568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:122696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.851 [2024-11-18 20:33:46.079582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.079597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:122704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.851 [2024-11-18 20:33:46.079611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.079626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:122712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.851 [2024-11-18 20:33:46.079648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.079664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:123296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.851 [2024-11-18 20:33:46.079678] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.079693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:123304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.851 [2024-11-18 20:33:46.079707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.851 [2024-11-18 20:33:46.079721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:123312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.852 [2024-11-18 20:33:46.079735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.852 [2024-11-18 20:33:46.079750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:123320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.852 [2024-11-18 20:33:46.079764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.852 [2024-11-18 20:33:46.079779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:123328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.852 [2024-11-18 20:33:46.079796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.852 [2024-11-18 20:33:46.079811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:123336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.852 [2024-11-18 20:33:46.079825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.852 [2024-11-18 20:33:46.079840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 
lba:123344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.852 [2024-11-18 20:33:46.079854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.852 [2024-11-18 20:33:46.079869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.852 [2024-11-18 20:33:46.079882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.852 [2024-11-18 20:33:46.079897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:123360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.852 [2024-11-18 20:33:46.079911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.852 [2024-11-18 20:33:46.079926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:123368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.852 [2024-11-18 20:33:46.079940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.852 [2024-11-18 20:33:46.079955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.852 [2024-11-18 20:33:46.079969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.852 [2024-11-18 20:33:46.079985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:123384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.852 [2024-11-18 20:33:46.079999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.852 
[2024-11-18 20:33:46.080014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:123392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.852 [2024-11-18 20:33:46.080027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.852 [2024-11-18 20:33:46.080042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:123400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.852 [2024-11-18 20:33:46.080055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.852 [2024-11-18 20:33:46.080070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:123408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.852 [2024-11-18 20:33:46.080084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.852 [2024-11-18 20:33:46.080114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:40.852 [2024-11-18 20:33:46.080130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:40.852 [2024-11-18 20:33:46.080142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123416 len:8 PRP1 0x0 PRP2 0x0 00:32:40.852 [2024-11-18 20:33:46.080155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.852 [2024-11-18 20:33:46.080221] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:40.852 [2024-11-18 20:33:46.080262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:40.852 
[2024-11-18 20:33:46.080281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.852 [2024-11-18 20:33:46.080296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:40.852 [2024-11-18 20:33:46.080310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.852 [2024-11-18 20:33:46.080324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:40.852 [2024-11-18 20:33:46.080337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.852 [2024-11-18 20:33:46.080350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:40.852 [2024-11-18 20:33:46.080363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.852 [2024-11-18 20:33:46.080376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:32:40.852 [2024-11-18 20:33:46.080416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x53e3b0 (9): Bad file descriptor 00:32:40.852 [2024-11-18 20:33:46.083625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:32:40.852 [2024-11-18 20:33:46.242013] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:32:40.852 8252.80 IOPS, 32.24 MiB/s [2024-11-18T19:33:52.860Z] 8281.73 IOPS, 32.35 MiB/s [2024-11-18T19:33:52.860Z] 8317.42 IOPS, 32.49 MiB/s [2024-11-18T19:33:52.860Z] 8341.62 IOPS, 32.58 MiB/s [2024-11-18T19:33:52.860Z] 8369.57 IOPS, 32.69 MiB/s [2024-11-18T19:33:52.860Z] 8387.33 IOPS, 32.76 MiB/s 00:32:40.852 Latency(us) 00:32:40.852 [2024-11-18T19:33:52.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:40.852 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:40.852 Verification LBA range: start 0x0 length 0x4000 00:32:40.852 NVMe0n1 : 15.01 8388.45 32.77 597.14 0.00 14217.91 546.13 18350.08 00:32:40.852 [2024-11-18T19:33:52.860Z] =================================================================================================================== 00:32:40.852 [2024-11-18T19:33:52.860Z] Total : 8388.45 32.77 597.14 0.00 14217.91 546.13 18350.08 00:32:40.852 Received shutdown signal, test time was about 15.000000 seconds 00:32:40.852 00:32:40.852 Latency(us) 00:32:40.852 [2024-11-18T19:33:52.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:40.852 [2024-11-18T19:33:52.860Z] =================================================================================================================== 00:32:40.852 [2024-11-18T19:33:52.860Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:40.852 20:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:32:40.852 20:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:32:40.852 20:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:32:40.852 20:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=363589 00:32:40.852 20:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 
-o 4096 -w verify -t 1 -f 00:32:40.852 20:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 363589 /var/tmp/bdevperf.sock 00:32:40.852 20:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 363589 ']' 00:32:40.852 20:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:40.852 20:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:40.852 20:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:40.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:40.852 20:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:40.852 20:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:40.852 20:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:40.852 20:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:40.852 20:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:40.852 [2024-11-18 20:33:52.527961] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:40.852 20:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:40.852 [2024-11-18 20:33:52.788651] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:40.852 20:33:52 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:41.419 NVMe0n1 00:32:41.419 20:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:41.677 00:32:41.677 20:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:42.242 00:32:42.242 20:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:42.242 20:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:42.242 20:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:42.810 20:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:46.097 20:33:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:46.097 20:33:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:46.097 20:33:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=364299 00:32:46.097 20:33:57 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:46.097 20:33:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 364299 00:32:47.030 { 00:32:47.030 "results": [ 00:32:47.030 { 00:32:47.030 "job": "NVMe0n1", 00:32:47.030 "core_mask": "0x1", 00:32:47.030 "workload": "verify", 00:32:47.030 "status": "finished", 00:32:47.030 "verify_range": { 00:32:47.030 "start": 0, 00:32:47.030 "length": 16384 00:32:47.030 }, 00:32:47.030 "queue_depth": 128, 00:32:47.030 "io_size": 4096, 00:32:47.030 "runtime": 1.046434, 00:32:47.030 "iops": 8139.070404822473, 00:32:47.030 "mibps": 31.793243768837787, 00:32:47.030 "io_failed": 0, 00:32:47.030 "io_timeout": 0, 00:32:47.030 "avg_latency_us": 15080.18245617697, 00:32:47.030 "min_latency_us": 2609.303703703704, 00:32:47.030 "max_latency_us": 41748.85925925926 00:32:47.030 } 00:32:47.030 ], 00:32:47.030 "core_count": 1 00:32:47.030 } 00:32:47.030 20:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:47.030 [2024-11-18 20:33:52.041726] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:32:47.030 [2024-11-18 20:33:52.041828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid363589 ] 00:32:47.030 [2024-11-18 20:33:52.111361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:47.030 [2024-11-18 20:33:52.156652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:47.030 [2024-11-18 20:33:54.497484] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:47.031 [2024-11-18 20:33:54.497568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:47.031 [2024-11-18 20:33:54.497592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.031 [2024-11-18 20:33:54.497633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:47.031 [2024-11-18 20:33:54.497657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.031 [2024-11-18 20:33:54.497672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:47.031 [2024-11-18 20:33:54.497686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.031 [2024-11-18 20:33:54.497700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:47.031 [2024-11-18 20:33:54.497714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.031 [2024-11-18 20:33:54.497728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:32:47.031 [2024-11-18 20:33:54.497774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:32:47.031 [2024-11-18 20:33:54.497806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21753b0 (9): Bad file descriptor 00:32:47.031 [2024-11-18 20:33:54.548041] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:32:47.031 Running I/O for 1 seconds... 00:32:47.031 8382.00 IOPS, 32.74 MiB/s 00:32:47.031 Latency(us) 00:32:47.031 [2024-11-18T19:33:59.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.031 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:47.031 Verification LBA range: start 0x0 length 0x4000 00:32:47.031 NVMe0n1 : 1.05 8139.07 31.79 0.00 0.00 15080.18 2609.30 41748.86 00:32:47.031 [2024-11-18T19:33:59.039Z] =================================================================================================================== 00:32:47.031 [2024-11-18T19:33:59.039Z] Total : 8139.07 31.79 0.00 0.00 15080.18 2609.30 41748.86 00:32:47.031 20:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:47.031 20:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:47.289 20:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:47.546 20:33:59 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:47.546 20:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:47.803 20:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:48.370 20:34:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:51.657 20:34:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:51.657 20:34:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:51.657 20:34:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 363589 00:32:51.657 20:34:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 363589 ']' 00:32:51.657 20:34:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 363589 00:32:51.657 20:34:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:51.657 20:34:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:51.657 20:34:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 363589 00:32:51.657 20:34:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:51.657 20:34:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:51.657 20:34:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 363589' 00:32:51.657 killing process 
with pid 363589 00:32:51.657 20:34:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 363589 00:32:51.657 20:34:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 363589 00:32:51.657 20:34:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:51.657 20:34:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:52.225 20:34:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:52.225 20:34:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:52.225 20:34:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:52.225 20:34:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:52.225 20:34:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:32:52.225 20:34:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:52.225 20:34:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:32:52.225 20:34:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:52.225 20:34:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:52.225 rmmod nvme_tcp 00:32:52.225 rmmod nvme_fabrics 00:32:52.225 rmmod nvme_keyring 00:32:52.225 20:34:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:52.225 20:34:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:32:52.225 20:34:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:32:52.225 20:34:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 361380 ']' 00:32:52.225 20:34:04 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@518 -- # killprocess 361380 00:32:52.225 20:34:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 361380 ']' 00:32:52.225 20:34:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 361380 00:32:52.225 20:34:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:52.225 20:34:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:52.225 20:34:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 361380 00:32:52.225 20:34:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:52.225 20:34:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:52.225 20:34:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 361380' 00:32:52.225 killing process with pid 361380 00:32:52.225 20:34:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 361380 00:32:52.225 20:34:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 361380 00:32:52.484 20:34:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:52.484 20:34:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:52.484 20:34:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:52.484 20:34:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:32:52.484 20:34:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:32:52.484 20:34:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:52.484 20:34:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:32:52.484 20:34:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:52.484 20:34:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:52.484 20:34:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:52.484 20:34:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:52.484 20:34:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:54.391 20:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:54.391 00:32:54.391 real 0m35.681s 00:32:54.391 user 2m5.824s 00:32:54.391 sys 0m5.927s 00:32:54.391 20:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:54.391 20:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:54.391 ************************************ 00:32:54.391 END TEST nvmf_failover 00:32:54.391 ************************************ 00:32:54.391 20:34:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:54.391 20:34:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:54.391 20:34:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:54.391 20:34:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.391 ************************************ 00:32:54.391 START TEST nvmf_host_discovery 00:32:54.391 ************************************ 00:32:54.391 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:54.650 * Looking for test storage... 
00:32:54.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:54.650 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:54.650 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:32:54.650 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:54.650 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:54.650 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:54.650 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:54.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.651 --rc genhtml_branch_coverage=1 00:32:54.651 --rc genhtml_function_coverage=1 00:32:54.651 --rc 
genhtml_legend=1 00:32:54.651 --rc geninfo_all_blocks=1 00:32:54.651 --rc geninfo_unexecuted_blocks=1 00:32:54.651 00:32:54.651 ' 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:54.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.651 --rc genhtml_branch_coverage=1 00:32:54.651 --rc genhtml_function_coverage=1 00:32:54.651 --rc genhtml_legend=1 00:32:54.651 --rc geninfo_all_blocks=1 00:32:54.651 --rc geninfo_unexecuted_blocks=1 00:32:54.651 00:32:54.651 ' 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:54.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.651 --rc genhtml_branch_coverage=1 00:32:54.651 --rc genhtml_function_coverage=1 00:32:54.651 --rc genhtml_legend=1 00:32:54.651 --rc geninfo_all_blocks=1 00:32:54.651 --rc geninfo_unexecuted_blocks=1 00:32:54.651 00:32:54.651 ' 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:54.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.651 --rc genhtml_branch_coverage=1 00:32:54.651 --rc genhtml_function_coverage=1 00:32:54.651 --rc genhtml_legend=1 00:32:54.651 --rc geninfo_all_blocks=1 00:32:54.651 --rc geninfo_unexecuted_blocks=1 00:32:54.651 00:32:54.651 ' 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:54.651 20:34:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:54.651 20:34:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:54.651 20:34:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:54.651 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:54.651 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:54.652 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:54.652 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:32:54.652 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:54.652 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:54.652 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:54.652 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:54.652 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:54.652 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:32:54.652 20:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:32:57.188 
20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:57.188 20:34:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:57.188 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:57.188 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:57.188 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:57.188 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:57.188 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:57.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:57.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:32:57.189 00:32:57.189 --- 10.0.0.2 ping statistics --- 00:32:57.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:57.189 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:57.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:57.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:32:57.189 00:32:57.189 --- 10.0.0.1 ping statistics --- 00:32:57.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:57.189 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:57.189 
20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=366913 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 366913 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 366913 ']' 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:57.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:57.189 20:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.189 [2024-11-18 20:34:08.872803] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:32:57.189 [2024-11-18 20:34:08.872874] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:57.189 [2024-11-18 20:34:08.943207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.189 [2024-11-18 20:34:08.988774] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:57.189 [2024-11-18 20:34:08.988831] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:57.189 [2024-11-18 20:34:08.988859] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:57.189 [2024-11-18 20:34:08.988870] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:57.189 [2024-11-18 20:34:08.988880] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:57.189 [2024-11-18 20:34:08.989563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.189 [2024-11-18 20:34:09.135896] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.189 [2024-11-18 20:34:09.144136] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:57.189 20:34:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.189 null0 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.189 null1 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=367055 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 367055 /tmp/host.sock 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 367055 ']' 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:57.189 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:57.189 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.447 [2024-11-18 20:34:09.216878] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:32:57.447 [2024-11-18 20:34:09.216960] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid367055 ] 00:32:57.447 [2024-11-18 20:34:09.281680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.448 [2024-11-18 20:34:09.327342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.448 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:57.448 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:57.448 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:57.448 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:57.448 20:34:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.448 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.448 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.448 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:57.448 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.448 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:57.706 20:34:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:57.706 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.707 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:57.707 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:57.707 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:57.707 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:57.707 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.707 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:57.707 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.707 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:57.707 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.707 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:57.707 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:57.707 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.707 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.707 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.707 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:57.707 20:34:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:57.707 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:57.707 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.707 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.707 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:57.707 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:57.707 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.707 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:57.707 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:57.707 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:57.707 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:57.707 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.707 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.707 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:57.707 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:57.707 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.967 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.968 [2024-11-18 20:34:09.721634] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:32:57.968 20:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:58.535 [2024-11-18 20:34:10.463111] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:58.535 [2024-11-18 20:34:10.463140] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:58.535 [2024-11-18 20:34:10.463162] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:58.793 [2024-11-18 20:34:10.549433] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:58.793 [2024-11-18 20:34:10.611197] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:32:58.793 [2024-11-18 20:34:10.612089] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x1fb01b0:1 started. 00:32:58.793 [2024-11-18 20:34:10.613818] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:58.793 [2024-11-18 20:34:10.613840] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:58.793 [2024-11-18 20:34:10.621376] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1fb01b0 was disconnected and freed. delete nvme_qpair. 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:59.051 20:34:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:59.051 20:34:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.051 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:32:59.051 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:59.051 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:59.052 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:59.052 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:32:59.052 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:59.052 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:59.052 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:59.052 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:59.052 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:59.052 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:59.052 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.052 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:59.052 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:59.052 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.052 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:59.052 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:59.052 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:59.052 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:59.052 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:59.052 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.052 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:59.310 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.310 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:59.310 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:59.310 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:59.310 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:59.310 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:59.310 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:59.310 
20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:59.310 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.310 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:59.310 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:59.310 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:59.310 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:59.310 [2024-11-18 20:34:11.309519] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1f9a2d0:1 started. 00:32:59.310 [2024-11-18 20:34:11.312946] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1f9a2d0 was disconnected and freed. delete nvme_qpair. 00:32:59.569 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.569 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:59.569 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:59.569 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:59.569 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:59.569 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:59.569 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:59.569 20:34:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:59.569 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:59.569 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:59.569 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:59.569 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:59.569 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:59.569 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.569 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:59.569 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.569 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:59.569 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:59.569 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:59.569 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:59.569 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:59.569 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.569 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:59.570 [2024-11-18 20:34:11.374972] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:59.570 [2024-11-18 20:34:11.375179] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:59.570 [2024-11-18 20:34:11.375209] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.570 [2024-11-18 20:34:11.460789] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:59.570 20:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:59.570 [2024-11-18 20:34:11.519474] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:32:59.570 [2024-11-18 20:34:11.519519] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:59.570 [2024-11-18 20:34:11.519534] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:59.570 [2024-11-18 20:34:11.519542] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.952 20:34:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.952 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.952 [2024-11-18 20:34:12.611960] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:00.952 [2024-11-18 20:34:12.612018] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:00.952 [2024-11-18 20:34:12.614288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:00.952 [2024-11-18 20:34:12.614337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.952 [2024-11-18 20:34:12.614356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:33:00.952 [2024-11-18 20:34:12.614370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.952 [2024-11-18 20:34:12.614385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:00.952 [2024-11-18 20:34:12.614399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.952 [2024-11-18 20:34:12.614419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:00.952 [2024-11-18 20:34:12.614434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.952 [2024-11-18 20:34:12.614447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f821f0 is same with the state(6) to be set 00:33:00.953 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.953 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:00.953 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:00.953 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:00.953 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:00.953 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:00.953 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:00.953 20:34:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:00.953 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:00.953 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.953 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:00.953 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.953 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:00.953 [2024-11-18 20:34:12.624276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f821f0 (9): Bad file descriptor 00:33:00.953 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.953 [2024-11-18 20:34:12.634324] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:00.953 [2024-11-18 20:34:12.634346] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:00.953 [2024-11-18 20:34:12.634356] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:00.953 [2024-11-18 20:34:12.634365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:00.953 [2024-11-18 20:34:12.634413] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:00.953 [2024-11-18 20:34:12.634617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.953 [2024-11-18 20:34:12.634655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f821f0 with addr=10.0.0.2, port=4420 00:33:00.953 [2024-11-18 20:34:12.634675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f821f0 is same with the state(6) to be set 00:33:00.953 [2024-11-18 20:34:12.634698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f821f0 (9): Bad file descriptor 00:33:00.953 [2024-11-18 20:34:12.634733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:00.953 [2024-11-18 20:34:12.634751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:00.953 [2024-11-18 20:34:12.634769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:00.953 [2024-11-18 20:34:12.634782] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:00.953 [2024-11-18 20:34:12.634793] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:00.953 [2024-11-18 20:34:12.634806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:00.953 [2024-11-18 20:34:12.644446] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:00.953 [2024-11-18 20:34:12.644466] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:33:00.953 [2024-11-18 20:34:12.644474] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:00.953 [2024-11-18 20:34:12.644481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:00.953 [2024-11-18 20:34:12.644518] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:00.953 [2024-11-18 20:34:12.644735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.953 [2024-11-18 20:34:12.644763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f821f0 with addr=10.0.0.2, port=4420 00:33:00.953 [2024-11-18 20:34:12.644780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f821f0 is same with the state(6) to be set 00:33:00.953 [2024-11-18 20:34:12.644802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f821f0 (9): Bad file descriptor 00:33:00.953 [2024-11-18 20:34:12.644822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:00.953 [2024-11-18 20:34:12.644837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:00.953 [2024-11-18 20:34:12.644850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:00.953 [2024-11-18 20:34:12.644862] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:00.953 [2024-11-18 20:34:12.644871] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:00.953 [2024-11-18 20:34:12.644878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:33:00.953 [2024-11-18 20:34:12.654553] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:00.953 [2024-11-18 20:34:12.654574] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:00.953 [2024-11-18 20:34:12.654583] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:00.953 [2024-11-18 20:34:12.654591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:00.953 [2024-11-18 20:34:12.654629] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:00.953 [2024-11-18 20:34:12.654791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.953 [2024-11-18 20:34:12.654819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f821f0 with addr=10.0.0.2, port=4420 00:33:00.953 [2024-11-18 20:34:12.654835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f821f0 is same with the state(6) to be set 00:33:00.953 [2024-11-18 20:34:12.654857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f821f0 (9): Bad file descriptor 00:33:00.953 [2024-11-18 20:34:12.654889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:00.953 [2024-11-18 20:34:12.654906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:00.953 [2024-11-18 20:34:12.654919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:00.953 [2024-11-18 20:34:12.654931] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:33:00.953 [2024-11-18 20:34:12.654947] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:00.953 [2024-11-18 20:34:12.654956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:00.953 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.953 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:00.953 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:00.953 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:00.953 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:00.953 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:00.953 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:00.953 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:00.953 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:00.953 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.953 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:00.953 [2024-11-18 20:34:12.664665] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:33:00.953 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.953 [2024-11-18 20:34:12.664689] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:00.953 [2024-11-18 20:34:12.664699] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:00.953 [2024-11-18 20:34:12.664707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:00.953 [2024-11-18 20:34:12.664732] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:00.953 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:00.953 [2024-11-18 20:34:12.664859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.953 [2024-11-18 20:34:12.664898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f821f0 with addr=10.0.0.2, port=4420 00:33:00.953 [2024-11-18 20:34:12.664916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f821f0 is same with the state(6) to be set 00:33:00.953 [2024-11-18 20:34:12.664939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f821f0 (9): Bad file descriptor 00:33:00.953 [2024-11-18 20:34:12.664960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:00.953 [2024-11-18 20:34:12.664974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:00.953 [2024-11-18 20:34:12.664988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:33:00.953 [2024-11-18 20:34:12.665000] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:00.953 [2024-11-18 20:34:12.665009] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:00.953 [2024-11-18 20:34:12.665017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:00.953 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:00.953 [2024-11-18 20:34:12.674767] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:00.953 [2024-11-18 20:34:12.674791] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:00.953 [2024-11-18 20:34:12.674807] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:00.953 [2024-11-18 20:34:12.674815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:00.954 [2024-11-18 20:34:12.674841] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:00.954 [2024-11-18 20:34:12.674959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.954 [2024-11-18 20:34:12.674986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f821f0 with addr=10.0.0.2, port=4420 00:33:00.954 [2024-11-18 20:34:12.675002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f821f0 is same with the state(6) to be set 00:33:00.954 [2024-11-18 20:34:12.675024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f821f0 (9): Bad file descriptor 00:33:00.954 [2024-11-18 20:34:12.675056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:00.954 [2024-11-18 20:34:12.675073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:00.954 [2024-11-18 20:34:12.675086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:00.954 [2024-11-18 20:34:12.675098] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:00.954 [2024-11-18 20:34:12.675106] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:00.954 [2024-11-18 20:34:12.675114] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:00.954 [2024-11-18 20:34:12.684876] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:00.954 [2024-11-18 20:34:12.684898] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:33:00.954 [2024-11-18 20:34:12.684907] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:00.954 [2024-11-18 20:34:12.684915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:00.954 [2024-11-18 20:34:12.684939] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:00.954 [2024-11-18 20:34:12.685098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.954 [2024-11-18 20:34:12.685125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f821f0 with addr=10.0.0.2, port=4420 00:33:00.954 [2024-11-18 20:34:12.685141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f821f0 is same with the state(6) to be set 00:33:00.954 [2024-11-18 20:34:12.685163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f821f0 (9): Bad file descriptor 00:33:00.954 [2024-11-18 20:34:12.685183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:00.954 [2024-11-18 20:34:12.685197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:00.954 [2024-11-18 20:34:12.685210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:00.954 [2024-11-18 20:34:12.685221] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:00.954 [2024-11-18 20:34:12.685230] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:00.954 [2024-11-18 20:34:12.685237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.954 [2024-11-18 20:34:12.694974] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:00.954 [2024-11-18 20:34:12.694995] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:00.954 [2024-11-18 20:34:12.695019] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:00.954 [2024-11-18 20:34:12.695026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:00.954 [2024-11-18 20:34:12.695049] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:00.954 [2024-11-18 20:34:12.695227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.954 [2024-11-18 20:34:12.695253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f821f0 with addr=10.0.0.2, port=4420 00:33:00.954 [2024-11-18 20:34:12.695269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f821f0 is same with the state(6) to be set 00:33:00.954 [2024-11-18 20:34:12.695301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f821f0 (9): Bad file descriptor 00:33:00.954 [2024-11-18 20:34:12.695324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:00.954 [2024-11-18 20:34:12.695338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:00.954 [2024-11-18 20:34:12.695351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:33:00.954 [2024-11-18 20:34:12.695363] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:00.954 [2024-11-18 20:34:12.695372] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:00.954 [2024-11-18 20:34:12.695380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:00.954 [2024-11-18 20:34:12.697842] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:00.954 [2024-11-18 20:34:12.697873] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:00.954 20:34:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 
00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:33:00.954 20:34:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.954 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:00.955 20:34:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.955 20:34:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.335 [2024-11-18 20:34:13.982258] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:02.335 [2024-11-18 
20:34:13.982290] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:02.335 [2024-11-18 20:34:13.982313] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:02.335 [2024-11-18 20:34:14.109719] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:33:02.335 [2024-11-18 20:34:14.215555] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:33:02.335 [2024-11-18 20:34:14.216450] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1f8cd60:1 started. 00:33:02.335 [2024-11-18 20:34:14.218469] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:02.336 [2024-11-18 20:34:14.218507] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:02.336 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.336 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:02.336 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:02.336 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:02.336 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:02.336 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:33:02.336 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:02.336 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:02.336 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:02.336 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.336 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.336 request: 00:33:02.336 { 00:33:02.336 "name": "nvme", 00:33:02.336 "trtype": "tcp", 00:33:02.336 "traddr": "10.0.0.2", 00:33:02.336 "adrfam": "ipv4", 00:33:02.336 "trsvcid": "8009", 00:33:02.336 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:02.336 "wait_for_attach": true, 00:33:02.336 "method": "bdev_nvme_start_discovery", 00:33:02.336 "req_id": 1 00:33:02.336 } 00:33:02.336 Got JSON-RPC error response 00:33:02.336 response: 00:33:02.336 { 00:33:02.336 "code": -17, 00:33:02.336 "message": "File exists" 00:33:02.336 } 00:33:02.336 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:02.336 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:02.336 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:02.336 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:02.337 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:02.337 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:33:02.337 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_discovery_info 00:33:02.337 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:02.337 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.337 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:02.337 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.337 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:02.337 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.337 [2024-11-18 20:34:14.261314] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1f8cd60 was disconnected and freed. delete nvme_qpair. 00:33:02.337 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:33:02.337 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:33:02.337 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:02.337 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:02.337 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.337 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.337 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:02.337 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:02.337 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.337 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 
00:33:02.337 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:02.337 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:02.338 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:02.338 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:02.338 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:02.338 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:02.338 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:02.338 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:02.338 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.338 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.338 request: 00:33:02.338 { 00:33:02.338 "name": "nvme_second", 00:33:02.338 "trtype": "tcp", 00:33:02.338 "traddr": "10.0.0.2", 00:33:02.338 "adrfam": "ipv4", 00:33:02.338 "trsvcid": "8009", 00:33:02.338 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:02.338 "wait_for_attach": true, 00:33:02.338 "method": "bdev_nvme_start_discovery", 00:33:02.338 "req_id": 1 00:33:02.338 } 00:33:02.338 Got JSON-RPC error response 00:33:02.338 response: 00:33:02.338 { 00:33:02.338 
"code": -17, 00:33:02.338 "message": "File exists" 00:33:02.338 } 00:33:02.338 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:02.338 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:02.338 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:02.338 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:02.338 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:02.338 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:33:02.338 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:02.338 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:02.338 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.338 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:02.338 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.339 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:02.597 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.597 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:33:02.597 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:33:02.597 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:02.597 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.597 20:34:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.597 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:02.597 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:02.597 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:02.597 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.597 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:02.597 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:02.597 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:02.597 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:02.597 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:02.597 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:02.597 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:02.597 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:02.597 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:02.598 20:34:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.598 20:34:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:03.533 [2024-11-18 20:34:15.418016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.533 [2024-11-18 20:34:15.418058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f957e0 with addr=10.0.0.2, port=8010 00:33:03.533 [2024-11-18 20:34:15.418083] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:03.533 [2024-11-18 20:34:15.418098] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:03.533 [2024-11-18 20:34:15.418110] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:04.497 [2024-11-18 20:34:16.420432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.497 [2024-11-18 20:34:16.420479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1faf4f0 with addr=10.0.0.2, port=8010 00:33:04.497 [2024-11-18 20:34:16.420510] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:04.497 [2024-11-18 20:34:16.420524] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:04.497 [2024-11-18 20:34:16.420536] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:05.550 [2024-11-18 20:34:17.422664] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:33:05.550 request: 00:33:05.550 { 00:33:05.550 "name": "nvme_second", 00:33:05.550 "trtype": "tcp", 00:33:05.550 "traddr": "10.0.0.2", 00:33:05.550 "adrfam": "ipv4", 00:33:05.550 "trsvcid": "8010", 00:33:05.550 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:05.550 "wait_for_attach": false, 00:33:05.550 "attach_timeout_ms": 3000, 
00:33:05.550 "method": "bdev_nvme_start_discovery", 00:33:05.550 "req_id": 1 00:33:05.550 } 00:33:05.550 Got JSON-RPC error response 00:33:05.550 response: 00:33:05.550 { 00:33:05.550 "code": -110, 00:33:05.550 "message": "Connection timed out" 00:33:05.550 } 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@161 -- # kill 367055 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:05.550 rmmod nvme_tcp 00:33:05.550 rmmod nvme_fabrics 00:33:05.550 rmmod nvme_keyring 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 366913 ']' 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 366913 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 366913 ']' 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 366913 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:05.550 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 366913 00:33:05.857 20:34:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:05.857 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:05.857 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 366913' 00:33:05.857 killing process with pid 366913 00:33:05.857 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 366913 00:33:05.857 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 366913 00:33:05.857 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:05.857 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:05.857 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:05.857 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:33:05.857 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:33:05.857 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:05.857 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:33:05.857 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:05.857 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:05.857 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.857 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:05.857 20:34:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.415 20:34:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:08.415 00:33:08.415 real 0m13.434s 00:33:08.415 user 0m19.278s 00:33:08.415 sys 0m2.905s 00:33:08.415 20:34:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:08.415 20:34:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:08.415 ************************************ 00:33:08.416 END TEST nvmf_host_discovery 00:33:08.416 ************************************ 00:33:08.416 20:34:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:08.416 20:34:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:08.416 20:34:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:08.416 20:34:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.416 ************************************ 00:33:08.416 START TEST nvmf_host_multipath_status 00:33:08.416 ************************************ 00:33:08.416 20:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:08.416 * Looking for test storage... 
00:33:08.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:08.416 20:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:08.416 20:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:33:08.416 20:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:33:08.416 20:34:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:08.416 20:34:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:08.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.416 --rc genhtml_branch_coverage=1 00:33:08.416 --rc genhtml_function_coverage=1 00:33:08.416 --rc genhtml_legend=1 00:33:08.416 --rc geninfo_all_blocks=1 00:33:08.416 --rc geninfo_unexecuted_blocks=1 00:33:08.416 00:33:08.416 ' 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:08.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.416 --rc genhtml_branch_coverage=1 00:33:08.416 --rc genhtml_function_coverage=1 00:33:08.416 --rc genhtml_legend=1 00:33:08.416 --rc geninfo_all_blocks=1 00:33:08.416 --rc geninfo_unexecuted_blocks=1 00:33:08.416 00:33:08.416 ' 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:08.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.416 --rc genhtml_branch_coverage=1 00:33:08.416 --rc genhtml_function_coverage=1 00:33:08.416 --rc genhtml_legend=1 00:33:08.416 --rc geninfo_all_blocks=1 00:33:08.416 --rc geninfo_unexecuted_blocks=1 00:33:08.416 00:33:08.416 ' 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:08.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.416 --rc genhtml_branch_coverage=1 00:33:08.416 --rc genhtml_function_coverage=1 00:33:08.416 --rc genhtml_legend=1 00:33:08.416 --rc geninfo_all_blocks=1 00:33:08.416 --rc geninfo_unexecuted_blocks=1 00:33:08.416 00:33:08.416 ' 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:33:08.416 
20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:33:08.416 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.417 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:33:08.417 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:08.417 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:08.417 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:08.417 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:08.417 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:08.417 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:08.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:08.417 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:08.417 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:08.417 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:08.417 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:33:08.417 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:08.417 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:08.417 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:33:08.417 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:08.417 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:08.417 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:33:08.417 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:08.417 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:08.417 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:08.417 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:08.417 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:08.417 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:08.417 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:08.417 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.417 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:08.417 20:34:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:08.417 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:33:08.417 20:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:10.325 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:10.325 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:10.325 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:10.325 20:34:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:10.325 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:33:10.325 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:10.326 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:10.326 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:10.326 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:10.326 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:10.326 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:10.326 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:10.326 20:34:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:10.326 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:10.326 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:10.326 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:10.326 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:10.326 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:10.326 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:10.326 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:10.326 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:10.326 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:10.326 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:10.326 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:10.326 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:10.326 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:10.326 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:10.326 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:10.585 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:10.585 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:10.585 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:10.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:10.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:33:10.585 00:33:10.585 --- 10.0.0.2 ping statistics --- 00:33:10.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.585 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:33:10.585 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:10.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:10.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:33:10.585 00:33:10.585 --- 10.0.0.1 ping statistics --- 00:33:10.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.585 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:33:10.585 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:10.585 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:33:10.585 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:10.585 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:10.585 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:10.585 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:10.585 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:10.585 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:10.585 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:10.585 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:33:10.585 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:10.585 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:10.585 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:10.585 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=370111 00:33:10.585 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 370111 00:33:10.585 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:33:10.585 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 370111 ']' 00:33:10.585 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:10.585 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:10.585 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:10.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:10.585 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:10.585 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:10.585 [2024-11-18 20:34:22.423061] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:33:10.585 [2024-11-18 20:34:22.423153] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:10.585 [2024-11-18 20:34:22.494560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:10.585 [2024-11-18 20:34:22.536933] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:10.585 [2024-11-18 20:34:22.536993] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:10.585 [2024-11-18 20:34:22.537020] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:10.585 [2024-11-18 20:34:22.537031] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:10.585 [2024-11-18 20:34:22.537041] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:10.585 [2024-11-18 20:34:22.538420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:10.585 [2024-11-18 20:34:22.538425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:10.844 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:10.844 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:33:10.844 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:10.844 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:10.844 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:10.844 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:10.844 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=370111 00:33:10.844 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:11.102 [2024-11-18 20:34:22.934044] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:11.102 20:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:33:11.361 Malloc0 00:33:11.361 20:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:33:11.619 20:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:12.186 20:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:12.443 [2024-11-18 20:34:24.233213] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:12.444 20:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:12.702 [2024-11-18 20:34:24.513980] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:12.702 20:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=370394 00:33:12.703 20:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:33:12.703 20:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:12.703 20:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 370394 /var/tmp/bdevperf.sock 00:33:12.703 20:34:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 370394 ']' 00:33:12.703 20:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:12.703 20:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:12.703 20:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:12.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:12.703 20:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:12.703 20:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:12.961 20:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:12.961 20:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:33:12.961 20:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:13.219 20:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:13.477 Nvme0n1 00:33:13.477 20:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:14.043 Nvme0n1 00:33:14.043 20:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:33:14.043 20:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:15.944 20:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:33:15.944 20:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:16.203 20:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:16.770 20:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:33:17.711 20:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:33:17.711 20:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:17.711 20:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.711 20:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:17.970 20:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.970 20:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:17.970 20:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.970 20:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:18.228 20:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:18.228 20:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:18.228 20:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.228 20:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:18.487 20:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.487 20:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:18.487 20:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.487 20:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:18.746 20:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.746 20:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:18.746 20:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.746 20:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:19.004 20:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.004 20:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:19.004 20:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.004 20:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:19.263 20:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.263 20:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:33:19.263 20:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:19.521 20:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:19.780 20:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:33:20.716 20:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:20.716 20:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:20.716 20:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.716 20:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:21.282 20:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:21.282 20:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:21.282 20:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.282 20:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:21.282 20:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.282 20:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:21.282 20:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.282 20:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:21.851 20:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.851 20:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:21.851 20:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.851 20:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:21.851 20:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.851 20:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:21.851 20:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.851 20:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:22.109 20:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.109 20:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:22.109 20:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.109 20:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:22.676 20:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.676 20:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:22.676 20:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:22.676 20:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:22.936 20:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:24.312 20:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:24.312 20:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:24.312 20:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.312 20:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:24.312 20:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.312 20:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:24.312 20:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.312 20:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:24.570 20:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:24.570 20:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:24.570 20:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.570 20:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:24.828 20:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.828 20:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:24.828 20:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.828 20:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:25.085 20:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:25.085 20:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:25.085 20:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:25.085 20:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:25.343 20:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:25.343 20:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:25.343 20:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:25.343 20:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:25.601 20:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:25.601 20:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:25.601 20:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:25.860 20:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:26.427 20:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:27.364 20:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:27.364 20:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:27.364 20:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:27.364 20:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:27.622 20:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:27.622 20:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:27.622 20:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:27.622 20:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:27.880 20:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:27.880 20:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:27.880 20:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:27.880 20:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:28.137 20:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:28.137 20:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:28.137 20:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:28.137 20:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:28.395 20:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:28.395 20:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:28.395 20:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:28.395 20:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:28.653 20:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:28.653 20:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:28.653 20:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:28.653 20:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:28.911 20:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:28.911 20:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:28.911 20:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:29.168 20:34:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:29.426 20:34:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:30.803 20:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:30.803 20:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:30.803 20:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:30.803 20:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:30.803 20:34:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:30.803 20:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:30.803 20:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:30.803 20:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:31.061 20:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:31.061 20:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:31.061 20:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:31.061 20:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:31.319 20:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:31.319 20:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:31.319 20:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:31.319 20:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:31.577 
20:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:31.577 20:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:31.577 20:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:31.577 20:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:31.836 20:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:31.836 20:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:31.836 20:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:31.836 20:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:32.094 20:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:32.094 20:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:32.094 20:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:32.353 20:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:32.612 20:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:33.994 20:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:33.994 20:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:33.994 20:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.994 20:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:33.994 20:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:33.994 20:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:33.994 20:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.994 20:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:34.252 20:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:34.252 20:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:34.252 20:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:34.252 20:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:34.510 20:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:34.510 20:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:34.510 20:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:34.510 20:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:34.768 20:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:34.768 20:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:34.768 20:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:34.768 20:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:35.026 20:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:35.026 20:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:35.026 20:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:35.026 20:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:35.285 20:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:35.285 20:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:35.543 20:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:33:35.543 20:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:35.801 20:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:36.061 20:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:37.438 20:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:37.439 20:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:37.439 20:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:33:37.439 20:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:37.439 20:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:37.439 20:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:37.439 20:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:37.439 20:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:37.695 20:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:37.695 20:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:37.695 20:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:37.695 20:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:37.953 20:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:37.953 20:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:37.953 20:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:33:37.953 20:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:38.211 20:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:38.211 20:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:38.211 20:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.211 20:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:38.469 20:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:38.469 20:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:38.469 20:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.469 20:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:39.035 20:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:39.035 20:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:39.035 20:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:39.293 20:34:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:39.552 20:34:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:40.493 20:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:40.493 20:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:40.493 20:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:40.493 20:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:40.751 20:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:40.751 20:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:40.751 20:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:40.751 20:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:41.009 20:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:41.009 20:34:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:41.009 20:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.009 20:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:41.267 20:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:41.267 20:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:41.267 20:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.267 20:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:41.525 20:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:41.525 20:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:41.525 20:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.525 20:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:41.783 20:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:41.783 
20:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:41.783 20:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.783 20:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:42.041 20:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:42.041 20:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:42.041 20:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:42.300 20:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:42.559 20:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:33:43.937 20:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:43.937 20:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:43.937 20:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.937 20:34:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:43.937 20:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:43.937 20:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:43.937 20:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.937 20:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:44.196 20:34:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:44.196 20:34:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:44.196 20:34:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:44.196 20:34:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:44.455 20:34:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:44.455 20:34:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:44.455 20:34:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:44.455 20:34:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:44.714 20:34:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:44.714 20:34:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:44.714 20:34:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:44.714 20:34:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:44.972 20:34:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:44.972 20:34:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:44.972 20:34:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:44.972 20:34:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:45.230 20:34:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:45.230 20:34:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:45.230 20:34:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:45.488 20:34:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:46.059 20:34:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:46.998 20:34:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:46.998 20:34:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:46.998 20:34:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.998 20:34:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:47.255 20:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:47.255 20:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:47.255 20:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:47.255 20:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:47.514 20:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:47.514 20:34:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:47.514 20:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:47.514 20:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:47.772 20:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:47.772 20:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:47.772 20:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:47.772 20:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:48.031 20:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:48.031 20:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:48.031 20:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.031 20:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:48.290 20:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:48.290 
20:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:48.290 20:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.290 20:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:48.550 20:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:48.550 20:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 370394 00:33:48.550 20:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 370394 ']' 00:33:48.550 20:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 370394 00:33:48.550 20:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:48.550 20:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:48.550 20:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 370394 00:33:48.550 20:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:33:48.550 20:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:33:48.550 20:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 370394' 00:33:48.550 killing process with pid 370394 00:33:48.550 20:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 370394 00:33:48.550 20:35:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 370394 00:33:48.550 { 00:33:48.550 "results": [ 00:33:48.550 { 00:33:48.550 "job": "Nvme0n1", 00:33:48.550 "core_mask": "0x4", 00:33:48.550 "workload": "verify", 00:33:48.550 "status": "terminated", 00:33:48.550 "verify_range": { 00:33:48.550 "start": 0, 00:33:48.550 "length": 16384 00:33:48.550 }, 00:33:48.550 "queue_depth": 128, 00:33:48.550 "io_size": 4096, 00:33:48.550 "runtime": 34.428895, 00:33:48.550 "iops": 7852.793416692578, 00:33:48.550 "mibps": 30.674974283955383, 00:33:48.550 "io_failed": 0, 00:33:48.550 "io_timeout": 0, 00:33:48.550 "avg_latency_us": 16250.154046971968, 00:33:48.550 "min_latency_us": 476.34962962962965, 00:33:48.550 "max_latency_us": 4076242.1096296296 00:33:48.550 } 00:33:48.550 ], 00:33:48.550 "core_count": 1 00:33:48.550 } 00:33:48.826 20:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 370394 00:33:48.826 20:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:48.826 [2024-11-18 20:34:24.580429] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:33:48.826 [2024-11-18 20:34:24.580514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid370394 ] 00:33:48.826 [2024-11-18 20:34:24.646246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:48.826 [2024-11-18 20:34:24.691516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:48.826 Running I/O for 90 seconds... 
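The terminated bdevperf job summary above reports `iops`, `io_size`, `runtime`, and `mibps` together, and the derived throughput field is internally consistent with the other two: MiB/s is just IOPS times the 4 KiB I/O size divided by 2^20. A minimal sanity-check sketch (the JSON below is a trimmed copy of the summary printed above, keeping only the fields the check reads):

```python
import json

# Trimmed copy of the terminated-job summary from the bdevperf output above.
result = json.loads("""
{
  "results": [
    {
      "job": "Nvme0n1",
      "io_size": 4096,
      "runtime": 34.428895,
      "iops": 7852.793416692578,
      "mibps": 30.674974283955383,
      "io_failed": 0
    }
  ],
  "core_count": 1
}
""")

job = result["results"][0]

# Throughput is derivable from IOPS and I/O size: iops * io_size / 2^20.
derived_mibps = job["iops"] * job["io_size"] / (1024 * 1024)
print(round(derived_mibps, 6))  # 30.674974, matching the reported "mibps"
```

The same cross-check works for any bdevperf run: `iops * io_size` in bytes per second, scaled to MiB, should reproduce the reported `mibps` to floating-point precision.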
00:33:48.826 8264.00 IOPS, 32.28 MiB/s [2024-11-18T19:35:00.834Z] 8316.50 IOPS, 32.49 MiB/s [2024-11-18T19:35:00.834Z] 8338.00 IOPS, 32.57 MiB/s [2024-11-18T19:35:00.834Z] 8343.75 IOPS, 32.59 MiB/s [2024-11-18T19:35:00.834Z] 8269.60 IOPS, 32.30 MiB/s [2024-11-18T19:35:00.834Z] 8297.50 IOPS, 32.41 MiB/s [2024-11-18T19:35:00.834Z] 8329.43 IOPS, 32.54 MiB/s [2024-11-18T19:35:00.834Z] 8365.75 IOPS, 32.68 MiB/s [2024-11-18T19:35:00.834Z] 8385.56 IOPS, 32.76 MiB/s [2024-11-18T19:35:00.834Z] 8379.90 IOPS, 32.73 MiB/s [2024-11-18T19:35:00.834Z] 8367.45 IOPS, 32.69 MiB/s [2024-11-18T19:35:00.834Z] 8365.75 IOPS, 32.68 MiB/s [2024-11-18T19:35:00.834Z] 8357.54 IOPS, 32.65 MiB/s [2024-11-18T19:35:00.834Z] 8339.36 IOPS, 32.58 MiB/s [2024-11-18T19:35:00.834Z] 8329.60 IOPS, 32.54 MiB/s [2024-11-18T19:35:00.834Z] [2024-11-18 20:34:41.116668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:85360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.826 [2024-11-18 20:34:41.116723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:48.826 [2024-11-18 20:34:41.116760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.116779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:48.826 [2024-11-18 20:34:41.116803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.116819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:48.826 [2024-11-18 20:34:41.116841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 
nsid:1 lba:85400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.116857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:48.826 [2024-11-18 20:34:41.116879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:85408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.116895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:48.826 [2024-11-18 20:34:41.116932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:85416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.116948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:48.826 [2024-11-18 20:34:41.116969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:85424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.116985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:48.826 [2024-11-18 20:34:41.117005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.117021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.826 [2024-11-18 20:34:41.117042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:85440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.117057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:18 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:48.826 [2024-11-18 20:34:41.117090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:85448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.117107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:48.826 [2024-11-18 20:34:41.117127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.117143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:48.826 [2024-11-18 20:34:41.117164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:85464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.117179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:48.826 [2024-11-18 20:34:41.117200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:85472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.117215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:48.826 [2024-11-18 20:34:41.117235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:85480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.117250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:48.826 [2024-11-18 20:34:41.117272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:85488 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.117287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:48.826 [2024-11-18 20:34:41.117307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:85496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.117322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:48.826 [2024-11-18 20:34:41.117342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:85504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.117358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:48.826 [2024-11-18 20:34:41.117380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.117396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:48.826 [2024-11-18 20:34:41.117416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.117431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:48.826 [2024-11-18 20:34:41.117451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:85528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.117467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006d p:0 m:0 
dnr:0 00:33:48.826 [2024-11-18 20:34:41.117487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:85536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.117502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:48.826 [2024-11-18 20:34:41.117523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:85544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.117541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:48.826 [2024-11-18 20:34:41.117563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:85552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.117578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:48.826 [2024-11-18 20:34:41.117599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:85560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.117614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:48.826 [2024-11-18 20:34:41.117660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.117677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:48.826 [2024-11-18 20:34:41.117698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:85576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:48.826 [2024-11-18 20:34:41.117714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:48.826 [2024-11-18 20:34:41.117735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.117750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:48.826 [2024-11-18 20:34:41.117771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:85592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.117787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:48.826 [2024-11-18 20:34:41.117808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:85600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.117824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:48.826 [2024-11-18 20:34:41.117844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.117859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:48.826 [2024-11-18 20:34:41.117881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:85616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.117897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:48.826 
[2024-11-18 20:34:41.117918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.117948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:48.826 [2024-11-18 20:34:41.117969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:85632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.826 [2024-11-18 20:34:41.117984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.118005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:85640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.118024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.118886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:85648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.118911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.118939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.118972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.118995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 
20:34:41.119011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.119033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:85672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.119048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.119069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:85680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.119085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.119106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.119137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.119160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:85696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.119175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.119227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:85704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.119248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 
20:34:41.119287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:85712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.119305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.119343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:85720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.119363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.119387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:85728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.119403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.119426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:85736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.119442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.119470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:85744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.119486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.119508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:85752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.119539] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.119575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:85760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.119593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.119616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:85768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.119657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.119682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:85776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.119699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.119721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.119737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.119759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:85792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.119775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.119797] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:85800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.119813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.119835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:85808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.119851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.119888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:85816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.119904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.119926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:85824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.119956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.119986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:85832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.120018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.120046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:85840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.120063] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.120084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:85848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.120099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.120132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.120148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.120169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:85864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.120185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.120206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:85872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.120222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.120243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.120258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.120288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:85888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.120306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.120329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:85896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.120345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.120789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:85904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.120813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.120840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:85912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.120857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.120879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:85920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.120896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.120918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:85928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.120934] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.120970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:85936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.120995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.121018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.121036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.121074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:85952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.121090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.121111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.121126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.121148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:85968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.121164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.121185] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:85976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.121200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.121236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:85984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.121252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.121273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:85992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.121288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.121309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:86000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.121324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.121345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:86008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.121385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:48.827 [2024-11-18 20:34:41.121411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:86016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.827 [2024-11-18 20:34:41.121427] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.121448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:86024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.121464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.121493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:86032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.121518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.121541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:86040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.121557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.121578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:86048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.121593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.121614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:86056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.121630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.121679] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:86064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.121707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.121731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:86072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.121747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.121769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:85368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.828 [2024-11-18 20:34:41.121784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.121806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:85376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.828 [2024-11-18 20:34:41.121822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.121843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:86080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.121859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.121881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.121897] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.121918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:86096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.121934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.121956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.121971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.122020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:86112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.122036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.122063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:86120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.122079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.122100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:86128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.122115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.122137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:86136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.122152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.122174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:86144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.122189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.122210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:86152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.122225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.122247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:86160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.122263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.122309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:86168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.122326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.122363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:86176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.122378] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.122400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:86184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.122416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.122437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:86192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.122452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.122473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.122488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.122510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:86208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.122525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.122550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:86216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.122566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.122597] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.122630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.122663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:86232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.122681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.122703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.122719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.122741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:86248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.122757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.122779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:86256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.122794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.122817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:86264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.122833] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.122871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.122886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.122932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:86280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.122950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.122987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:86288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.123004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.123026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:86296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.123041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.123062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:86304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.123078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.123099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:86312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.123118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.123140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:86320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.123156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.123177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:86328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.123193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.123227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:86336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.123246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.123991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:86344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.124016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.124044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.124062] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.124084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:86360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.124100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.124137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:86368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.124155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.124177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:86376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.124193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.124215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:85360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.828 [2024-11-18 20:34:41.124231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:48.828 [2024-11-18 20:34:41.124282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:85384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.828 [2024-11-18 20:34:41.124300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.124321] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.124336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.124358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:85400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.124378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.124400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.124416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.124437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:85416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.124452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.124474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.124489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.124510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:85432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.124525] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.124570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:85440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.124589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.124611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:85448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.124652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.124688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.124707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.124730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:85464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.124746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.124768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:85472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.124784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.124805] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:85480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.124821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.124843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.124859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.124881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:85496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.124896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.124949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:85504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.124968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.124996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:85512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.125012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.125033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.125048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.125070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:85528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.125085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.125106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:85536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.125131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.125153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:85544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.125168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.125205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:85552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.125228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.125268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:85560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.125284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.125306] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:85568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.125322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.125343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:85576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.125358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.125380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.125395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.125416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:85592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.125431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.125457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:85600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.125473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.125494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:85608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.125510] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.125541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:85616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.125558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.125580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.125595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.125631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.125656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.125686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:85640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.125704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.125726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:85648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.125742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.125764] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.125779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.125817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:85664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.125864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.125890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:85672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.125907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.125943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.125959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.125981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.125996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.126018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:85696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.126037] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.126059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:85704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.126075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.126096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:85712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.126112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.126133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:85720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.829 [2024-11-18 20:34:41.126163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:48.829 [2024-11-18 20:34:41.126184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:85728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.126200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.126221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:85736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.126236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.126256] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:85744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.126271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.126292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:85752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.126307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.126328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:85760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.126343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.126369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:85768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.126384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.126405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:85776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.126421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.126441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.126456] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.126476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:85792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.126500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.126522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:85800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.126538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.126558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:85808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.126573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.126594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:85816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.126609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.126653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:85824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.126671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.126692] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:85832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.126708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.126729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:85840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.126745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.126766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:85848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.126781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.126802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:85856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.126818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.126839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:85864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.126854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.126875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.126891] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.126912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.126943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.126965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:85888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.126980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.127788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:85896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.127812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.127858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.127880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.127904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:85912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.127936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.127958] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:85920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.128003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.128029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:85928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.128046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.128068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:85936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.128084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.128121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:85944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.128137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.128159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.128174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.128225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:85960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.128244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.128266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:85968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.128298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.128320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:85976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.128335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.128356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:85984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.128372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.128397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:85992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.128414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.128435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:86000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.128450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:48.830 [2024-11-18 20:34:41.128472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.830 [2024-11-18 20:34:41.128501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
[... the same *NOTICE* command/completion pair repeats for the remaining in-flight I/O on qid:1 — WRITEs covering lba 86016-86336 and lba 85384-85896, plus READs at lba 85360-85376 — each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 002a through 001b ...]
00:33:48.832 [2024-11-18 20:34:41.145494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:85904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.832 [2024-11-18 20:34:41.145511] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:48.832 [2024-11-18 20:34:41.145546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:85912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.832 [2024-11-18 20:34:41.145564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:48.832 [2024-11-18 20:34:41.145586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:85920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.832 [2024-11-18 20:34:41.145602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:48.832 [2024-11-18 20:34:41.145624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.832 [2024-11-18 20:34:41.145649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:48.832 [2024-11-18 20:34:41.145674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:85936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.832 [2024-11-18 20:34:41.145690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:48.832 [2024-11-18 20:34:41.145712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.145744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.145773] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:85952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.145790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.145811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:85960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.145827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.145848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:85968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.145864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.145886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:85976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.145901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.145922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:85984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.145953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.145976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:85992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.145991] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.146028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:86000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.146043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.146088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:86008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.146107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.146130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:86016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.146145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.146167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:86024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.146182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.146204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:86032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.146219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.146239] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:86040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.146254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.146280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:86048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.146296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.146317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:86056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.146332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.146353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:86064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.146393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.146417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:86072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.146433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.146470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:85368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.833 [2024-11-18 20:34:41.146485] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.146506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:85376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.833 [2024-11-18 20:34:41.146521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.146543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:86080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.146558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.146579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.146595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.146631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:86096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.146657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.146680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:86104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.146697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.146730] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:86112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.146747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.146769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:86120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.146784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.146806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:86128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.146826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.146849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:86136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.146865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.146887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:86144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.146903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.146924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:86152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.146954] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.146976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:86160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.146992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.147037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:86168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.147055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.147090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:86176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.147106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.147127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.147143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.147164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:86192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.147179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.147199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:86200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.147214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.147235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.147250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.147271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:86216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.147286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.147307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.147335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.147359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:86232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.147375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.147396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:86240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.147411] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.147433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:86248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.147448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.147469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.147484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.147505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:86264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.147521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.147542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:86272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.147557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.147592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:86280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.147616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.147662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:86288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.147681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.147704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:86296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.833 [2024-11-18 20:34:41.147720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:48.833 [2024-11-18 20:34:41.147741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:86304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.147757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.147779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:86312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.147794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.147816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:86320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.147832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.148599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:86328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.148623] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.148660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.148679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.148702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:86344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.148732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.148759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:86352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.148776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.148797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:86360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.148813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.148835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.148851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.148872] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:86376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.148888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.148910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:85360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.834 [2024-11-18 20:34:41.148926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.148963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:85384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.148979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.149000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.149015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.149037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:85400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.149052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.149073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.149088] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.149114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:85416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.149131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.149152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:85424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.149167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.149188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:85432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.149203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.149256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:85440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.149273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.149295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:85448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.149311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.149342] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.149360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.149384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:85464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.149400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.149422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:85472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.149437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.149459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:85480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.149475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.149496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.149526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.149548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:85496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.149571] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.149613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:85504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.149630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.149662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:85512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.149683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.149706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.149722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.149744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:85528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.149760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.149782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:85536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.149797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.149819] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:85544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.149835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.149857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:85552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.149872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.149905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:85560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.149923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.149961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.149977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.149997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:85576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.150012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.150033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.150048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.150069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:85592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.150084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.150104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:85600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.150119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.150140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:85608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.150159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.150205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.150222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.150245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.150275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.150297] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:85632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.150312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.150333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.150348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.150368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:85648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.150384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.150404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.150429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.150452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.150467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.150489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:85672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.150504] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.150525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:85680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.150540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.150560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.150575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.150596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:85696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.150626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.150657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:85704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.150675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.150702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:85712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.150719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.150741] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:85720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.150757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:48.834 [2024-11-18 20:34:41.150778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:85728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.834 [2024-11-18 20:34:41.150793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.150815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:85736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.150831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.150852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:85744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.150868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.150889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:85752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.150905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.150942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:85760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.150957] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.150979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:85768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.151009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.151030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:85776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.151044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.151064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.151079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.151099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:85792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.151113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.151134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:85800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.151148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.151172] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:85808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.151187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.151207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:85816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.151222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.151242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:85824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.151256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.151276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:85832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.151291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.151312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:85840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.151326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.151346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:85848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.151361] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.151381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.151396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.151415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.151430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.151450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:85872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.151465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.151485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:85880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.151500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.152403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.152427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.152455] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:85896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.152472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.152494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:85904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.152518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.152541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:85912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.152557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.152579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:85920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.152596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.152618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:85928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.152657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.152685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.152702] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.152724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:85944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.152741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.152763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:85952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.152778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.152800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:85960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.152816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.152838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:85968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.152854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.152875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:85976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.152891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.152913] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:85984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.152943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.152976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:85992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.153008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.153030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:86000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.153050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.153072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:86008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.153097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.153120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.153136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.153158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:86024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.153173] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.153194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:86032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.153209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.153230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:86040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.153245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.153276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:86048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.153293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.153314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:86056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.153329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.153350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:86064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.153366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.153387] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:86072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.153402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.153423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:85368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.835 [2024-11-18 20:34:41.153438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.153459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:85376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.835 [2024-11-18 20:34:41.153475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.153496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:86080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.153511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.153561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:86088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.153578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.153616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.153632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.153678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:86104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.153695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.153717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.153734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.153756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:86120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.153772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.153794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:86128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.153810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.153832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:86136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.153848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:48.835 [2024-11-18 20:34:41.153870] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:86144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.835 [2024-11-18 20:34:41.153886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.153919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:86152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.153936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.153973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:86160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.153989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.154011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:86168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.154027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.154048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:86176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.154063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.154089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:86184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.154105] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.154126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:86192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.154141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.154162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:86200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.154177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.154225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:86208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.154241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.154277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:86216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.154293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.154314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:86224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.154330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.154355] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:86232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.154371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.154392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.154407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.154429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:86248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.154444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.154465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:86256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.154480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.154515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.154534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.154556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.154571] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.154591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:86280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.154611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.154658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:86288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.154676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.154699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:86296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.154715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.154736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.154752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.154786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:86312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.154804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.155522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:86320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.155563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.155594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:86328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.155612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.155634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:86336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.155660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.155683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:86344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.155700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.155722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:86352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.155738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.155761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:86360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.155777] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.155799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:86368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.155815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.155837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:86376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.155858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.155881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.836 [2024-11-18 20:34:41.155897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.155935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:85384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.155951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.155972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.156002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.156024] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:85400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.156039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.156059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:85408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.156074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.156094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:85416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.156108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.156129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:85424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.156144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.156164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:85432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.156178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.156198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:85440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.156213] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.156233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:85448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.156248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.156268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.156283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.156303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:85464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.156318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.156367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:85472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.156386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.156409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:85480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.156425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.156446] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.156462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.156494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:85496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.156511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.156533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:85504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.156549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.156570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:85512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.156585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.156606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.836 [2024-11-18 20:34:41.156647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:48.836 [2024-11-18 20:34:41.156679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.156697] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.156731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:85536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.156748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.156769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:85544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.156785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.156807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.156823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.156844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:85560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.156860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.156886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:85568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.156903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.156940] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.156956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.156978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.156993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.157039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:85592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.157056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.157093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:85600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.157108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.157129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:85608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.157145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.157167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:85616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.157182] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.157203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.157218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.157239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:85632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.157255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.157291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:85640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.157309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.157330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:85648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.157345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.157372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.157388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.157409] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:85664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.157428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.157450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:85672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.157466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.157487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:85680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.157502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.157524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.157554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.157575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:85696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.157590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.157611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:85704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.157649] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.157673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:85712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.157690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.157712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:85720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.157728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.157750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:85728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.157766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.157788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:85736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.157804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.157825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:85744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.157841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.157862] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:85752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.157878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.157900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:85760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.157936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.157959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:85768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.157974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.157994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:85776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.158008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.158030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.158045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.158065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:85792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.158080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.158100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.158115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.158135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:85808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.158149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.158169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:85816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.158184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.158204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:85824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.158219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.158239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:85832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.158253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.158273] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:85840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.158288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.158308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:85848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.158322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.158342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:85856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.158357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.158381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:85864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.158397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.158417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:85872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.158432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.159306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:85880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.159330] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.159357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:85888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.159374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.159397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:85896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.159413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.159434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:85904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.159450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.159472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:85912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.159488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.159509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.159525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:48.837 [2024-11-18 20:34:41.159547] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:85928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.837 [2024-11-18 20:34:41.159562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:48.837 [... ~120 further qid:1 WRITE/READ commands (nsid:1, lba 85360-86376, len:8), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...] 00:33:48.840 [2024-11-18 20:34:41.165182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:85824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.165197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.165217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:85832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.165232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.165252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.165267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.165287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.165303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.165323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:85856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.165338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.165359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:85864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.165374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.166235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.166259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.166286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:85880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.166304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.166326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:85888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.166342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.166370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:85896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.166386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.166408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:85904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.166424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.166446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:85912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.166466] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.166503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.166534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.166575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:85928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.166592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.166615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:85936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.166631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.166664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:85944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.166681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.166702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:85952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.166718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.166740] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:85960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.166756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.166778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:85968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.166794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.166815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:85976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.166831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.166864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:85984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.166883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.166905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:85992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.166921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.166942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.166982] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.167009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:86008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.167025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.167050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:86016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.167065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.167087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:86024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.167102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.167123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:86032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.167138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.167185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:86040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.167201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.167238] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:86048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.167254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.167274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:86056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.167289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.167310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:86064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.167325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.167346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:86072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.167361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.167381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:85368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.840 [2024-11-18 20:34:41.167396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.167417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:85376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.840 [2024-11-18 20:34:41.167433] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.167462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.167480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.167502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:86088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.167518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.167544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.167560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.167581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:86104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.167596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.167633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:86112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.167658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.167681] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:86120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.167698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.167720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:86128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.167736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.167758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:86136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.167784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.167808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:86144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.167824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.167846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:86152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.167861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.167883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:86160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.167899] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.167920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:86168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.840 [2024-11-18 20:34:41.167956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:48.840 [2024-11-18 20:34:41.167978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:86176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.841 [2024-11-18 20:34:41.167993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:48.841 [2024-11-18 20:34:41.168014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:86184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.841 [2024-11-18 20:34:41.168029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.841 [2024-11-18 20:34:41.168050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:86192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.841 [2024-11-18 20:34:41.168084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:48.841 [2024-11-18 20:34:41.168118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:86200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.841 [2024-11-18 20:34:41.168149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:48.841 [2024-11-18 20:34:41.168171] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:86208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.841 [2024-11-18 20:34:41.168186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:48.841 [2024-11-18 20:34:41.168208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:86216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.841 [2024-11-18 20:34:41.168223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:48.841 [2024-11-18 20:34:41.168244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.841 [2024-11-18 20:34:41.168260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:48.841 [2024-11-18 20:34:41.168281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:86232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.841 [2024-11-18 20:34:41.168296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:48.841 [2024-11-18 20:34:41.168317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:86240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.841 [2024-11-18 20:34:41.168332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:48.841 [2024-11-18 20:34:41.168353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.841 [2024-11-18 20:34:41.168368] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:48.841 [2024-11-18 20:34:41.168401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.841 [2024-11-18 20:34:41.168420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:48.841 [2024-11-18 20:34:41.168442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:86264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.841 [2024-11-18 20:34:41.168458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:48.841 [2024-11-18 20:34:41.168479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:86272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.841 [2024-11-18 20:34:41.168494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:48.841 [2024-11-18 20:34:41.168515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:86280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.841 [2024-11-18 20:34:41.168530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:48.841 [2024-11-18 20:34:41.168551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.841 [2024-11-18 20:34:41.168570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:48.841 [2024-11-18 20:34:41.168593] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:86296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.841 [2024-11-18 20:34:41.168609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:48.841 [2024-11-18 20:34:41.169359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:86304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.841 [2024-11-18 20:34:41.169382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:48.841 [2024-11-18 20:34:41.169409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:86312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.841 [2024-11-18 20:34:41.169426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:48.841 [2024-11-18 20:34:41.169448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:86320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.841 [2024-11-18 20:34:41.169480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:48.841 [2024-11-18 20:34:41.169504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:86328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.841 [2024-11-18 20:34:41.169521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:48.841 [2024-11-18 20:34:41.169542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:86336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.841 [2024-11-18 20:34:41.169558] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:48.841 [2024-11-18 20:34:41.169580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:86344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.841 [2024-11-18 20:34:41.169596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:48.841 [2024-11-18 20:34:41.169633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:86352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.841 [2024-11-18 20:34:41.169657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:48.841 [2024-11-18 20:34:41.169695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:86360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.841 [2024-11-18 20:34:41.169712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:48.841 [2024-11-18 20:34:41.169734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:86368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.841 [2024-11-18 20:34:41.169750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:48.841 [2024-11-18 20:34:41.169772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:86376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.841 [2024-11-18 20:34:41.169788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:48.841 [2024-11-18 20:34:41.169810] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:85360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.841 [2024-11-18 20:34:41.169825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:33:48.841 [2024-11-18 20:34:41.169852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:85384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:48.841 [2024-11-18 20:34:41.169869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0
[repeated command/completion pairs elided: sequential WRITE commands on sqid:1 covering lba 85392 through 86272 (plus READs at lba 85368 and 85376), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0]
00:33:48.843 [2024-11-18 20:34:41.181362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:86280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:48.843 [2024-11-18 20:34:41.181377] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:48.843 [2024-11-18 20:34:41.181402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:86288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.843 [2024-11-18 20:34:41.181417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:48.843 [2024-11-18 20:34:41.181560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:86296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.843 [2024-11-18 20:34:41.181580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:48.843 7834.88 IOPS, 30.60 MiB/s [2024-11-18T19:35:00.851Z] 7374.00 IOPS, 28.80 MiB/s [2024-11-18T19:35:00.851Z] 6964.33 IOPS, 27.20 MiB/s [2024-11-18T19:35:00.851Z] 6597.79 IOPS, 25.77 MiB/s [2024-11-18T19:35:00.851Z] 6647.95 IOPS, 25.97 MiB/s [2024-11-18T19:35:00.851Z] 6742.76 IOPS, 26.34 MiB/s [2024-11-18T19:35:00.851Z] 6845.95 IOPS, 26.74 MiB/s [2024-11-18T19:35:00.851Z] 7025.39 IOPS, 27.44 MiB/s [2024-11-18T19:35:00.851Z] 7205.79 IOPS, 28.15 MiB/s [2024-11-18T19:35:00.851Z] 7352.96 IOPS, 28.72 MiB/s [2024-11-18T19:35:00.851Z] 7401.69 IOPS, 28.91 MiB/s [2024-11-18T19:35:00.851Z] 7446.89 IOPS, 29.09 MiB/s [2024-11-18T19:35:00.851Z] 7484.96 IOPS, 29.24 MiB/s [2024-11-18T19:35:00.851Z] 7567.52 IOPS, 29.56 MiB/s [2024-11-18T19:35:00.851Z] 7681.40 IOPS, 30.01 MiB/s [2024-11-18T19:35:00.851Z] 7787.03 IOPS, 30.42 MiB/s [2024-11-18T19:35:00.851Z] [2024-11-18 20:34:57.729605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.843 [2024-11-18 20:34:57.729682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
[repeated nvme_qpair.c WRITE/READ command + ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion NOTICE lines omitted] 00:33:48.844 [2024-11-18 20:34:57.734353] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:48.844 [2024-11-18 20:34:57.734375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.844 [2024-11-18 20:34:57.734391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:48.844 7847.22 IOPS, 30.65 MiB/s [2024-11-18T19:35:00.852Z] 7854.21 IOPS, 30.68 MiB/s [2024-11-18T19:35:00.852Z] 7864.85 IOPS, 30.72 MiB/s [2024-11-18T19:35:00.852Z] Received shutdown signal, test time was about 34.429720 seconds 00:33:48.844 00:33:48.844 Latency(us) 00:33:48.844 [2024-11-18T19:35:00.852Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:48.844 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:33:48.844 Verification LBA range: start 0x0 length 0x4000 00:33:48.845 Nvme0n1 : 34.43 7852.79 30.67 0.00 0.00 16250.15 476.35 4076242.11 00:33:48.845 [2024-11-18T19:35:00.853Z] =================================================================================================================== 00:33:48.845 [2024-11-18T19:35:00.853Z] Total : 7852.79 30.67 0.00 0.00 16250.15 476.35 4076242.11 00:33:48.845 20:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:49.104 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:33:49.104 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:49.104 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@148 -- # nvmftestfini 00:33:49.104 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:49.104 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:33:49.104 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:49.104 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:33:49.104 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:49.104 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:49.104 rmmod nvme_tcp 00:33:49.104 rmmod nvme_fabrics 00:33:49.104 rmmod nvme_keyring 00:33:49.104 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:49.104 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:33:49.104 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:33:49.104 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 370111 ']' 00:33:49.104 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 370111 00:33:49.104 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 370111 ']' 00:33:49.104 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 370111 00:33:49.104 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:49.104 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:49.104 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 370111 00:33:49.363 20:35:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:49.363 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:49.363 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 370111' 00:33:49.363 killing process with pid 370111 00:33:49.363 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 370111 00:33:49.363 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 370111 00:33:49.363 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:49.363 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:49.363 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:49.363 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:33:49.363 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:33:49.363 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:49.363 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:33:49.363 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:49.363 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:49.363 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:49.363 20:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:49.363 20:35:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:51.902 20:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:51.902 00:33:51.902 real 0m43.523s 00:33:51.902 user 2m10.779s 00:33:51.902 sys 0m11.606s 00:33:51.902 20:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:51.902 20:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:51.902 ************************************ 00:33:51.902 END TEST nvmf_host_multipath_status 00:33:51.902 ************************************ 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.903 ************************************ 00:33:51.903 START TEST nvmf_discovery_remove_ifc 00:33:51.903 ************************************ 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:51.903 * Looking for test storage... 
00:33:51.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:33:51.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.903 --rc genhtml_branch_coverage=1 00:33:51.903 --rc genhtml_function_coverage=1 00:33:51.903 --rc genhtml_legend=1 00:33:51.903 --rc geninfo_all_blocks=1 00:33:51.903 --rc geninfo_unexecuted_blocks=1 00:33:51.903 00:33:51.903 ' 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:51.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.903 --rc genhtml_branch_coverage=1 00:33:51.903 --rc genhtml_function_coverage=1 00:33:51.903 --rc genhtml_legend=1 00:33:51.903 --rc geninfo_all_blocks=1 00:33:51.903 --rc geninfo_unexecuted_blocks=1 00:33:51.903 00:33:51.903 ' 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:51.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.903 --rc genhtml_branch_coverage=1 00:33:51.903 --rc genhtml_function_coverage=1 00:33:51.903 --rc genhtml_legend=1 00:33:51.903 --rc geninfo_all_blocks=1 00:33:51.903 --rc geninfo_unexecuted_blocks=1 00:33:51.903 00:33:51.903 ' 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:51.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.903 --rc genhtml_branch_coverage=1 00:33:51.903 --rc genhtml_function_coverage=1 00:33:51.903 --rc genhtml_legend=1 00:33:51.903 --rc geninfo_all_blocks=1 00:33:51.903 --rc geninfo_unexecuted_blocks=1 00:33:51.903 00:33:51.903 ' 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:33:51.903 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:51.904 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:51.904 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:51.904 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:33:51.904 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:51.904 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:51.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:51.904 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:51.904 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:51.904 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:51.904 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:51.904 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:51.904 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:51.904 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:51.904 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:51.904 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:51.904 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:51.904 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:51.904 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:51.904 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:51.904 
20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:51.904 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:51.904 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:51.904 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:51.904 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:51.904 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:51.904 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:51.904 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:33:51.904 20:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:33:53.809 20:35:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:53.809 20:35:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:53.809 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:53.809 20:35:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:53.809 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:53.809 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:53.810 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:53.810 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:53.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:53.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:33:53.810 00:33:53.810 --- 10.0.0.2 ping statistics --- 00:33:53.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:53.810 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:53.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:53.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:33:53.810 00:33:53.810 --- 10.0.0.1 ping statistics --- 00:33:53.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:53.810 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=376851 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 376851 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 376851 ']' 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:53.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:53.810 20:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:54.069 [2024-11-18 20:35:05.824899] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:33:54.069 [2024-11-18 20:35:05.825013] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:54.069 [2024-11-18 20:35:05.896167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:54.069 [2024-11-18 20:35:05.942187] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:54.069 [2024-11-18 20:35:05.942238] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:54.069 [2024-11-18 20:35:05.942266] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:54.069 [2024-11-18 20:35:05.942277] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:54.069 [2024-11-18 20:35:05.942287] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:54.069 [2024-11-18 20:35:05.942909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:54.069 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:54.069 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:54.069 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:54.069 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:54.069 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:54.327 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:54.327 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:54.327 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.327 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:54.327 [2024-11-18 20:35:06.092709] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:54.327 [2024-11-18 20:35:06.100903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:54.327 null0 00:33:54.327 [2024-11-18 20:35:06.132843] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:33:54.327 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.327 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=376877 00:33:54.327 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:54.327 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 376877 /tmp/host.sock 00:33:54.327 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 376877 ']' 00:33:54.327 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:33:54.328 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:54.328 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:54.328 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:54.328 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:54.328 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:54.328 [2024-11-18 20:35:06.196546] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:33:54.328 [2024-11-18 20:35:06.196624] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid376877 ] 00:33:54.328 [2024-11-18 20:35:06.261091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:54.328 [2024-11-18 20:35:06.306456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:54.587 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:54.587 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:54.587 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:54.587 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:54.587 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.587 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:54.587 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.587 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:54.587 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.587 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:54.587 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.587 20:35:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:54.587 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.587 20:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:55.538 [2024-11-18 20:35:07.530074] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:55.538 [2024-11-18 20:35:07.530101] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:55.538 [2024-11-18 20:35:07.530129] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:55.796 [2024-11-18 20:35:07.616431] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:55.796 [2024-11-18 20:35:07.711262] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:33:55.796 [2024-11-18 20:35:07.712157] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1002c00:1 started. 
00:33:55.796 [2024-11-18 20:35:07.713938] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:55.796 [2024-11-18 20:35:07.714016] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:55.796 [2024-11-18 20:35:07.714053] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:55.797 [2024-11-18 20:35:07.714075] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:55.797 [2024-11-18 20:35:07.714108] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:55.797 20:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.797 20:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:55.797 20:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:55.797 20:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:55.797 20:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:55.797 20:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.797 20:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:55.797 20:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:55.797 [2024-11-18 20:35:07.718600] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1002c00 was disconnected and freed. delete nvme_qpair. 
00:33:55.797 20:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:55.797 20:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.797 20:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:55.797 20:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:55.797 20:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:56.057 20:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:56.057 20:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:56.057 20:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:56.057 20:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:56.057 20:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.057 20:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:56.057 20:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:56.057 20:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:56.057 20:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.057 20:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:56.057 20:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:56.997 20:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:56.997 20:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:56.997 20:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.997 20:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:56.997 20:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:56.997 20:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:56.997 20:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:56.997 20:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.997 20:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:56.997 20:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:57.938 20:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:57.938 20:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:57.938 20:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:57.938 20:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.938 20:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:57.938 20:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 
00:33:57.938 20:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:57.938 20:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.938 20:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:57.938 20:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:59.329 20:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:59.329 20:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:59.329 20:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:59.329 20:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:59.329 20:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.329 20:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:59.329 20:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:59.329 20:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.329 20:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:59.329 20:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:00.269 20:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:00.269 20:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:00.269 20:35:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.269 20:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:00.269 20:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:00.269 20:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:00.269 20:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:00.269 20:35:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.269 20:35:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:00.269 20:35:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:01.211 20:35:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:01.211 20:35:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:01.211 20:35:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:01.211 20:35:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.211 20:35:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:01.211 20:35:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:01.211 20:35:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:01.211 20:35:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.211 20:35:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ 
nvme0n1 != '' ]] 00:34:01.211 20:35:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:01.211 [2024-11-18 20:35:13.155451] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:34:01.211 [2024-11-18 20:35:13.155531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:01.211 [2024-11-18 20:35:13.155555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.211 [2024-11-18 20:35:13.155574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:01.211 [2024-11-18 20:35:13.155588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.211 [2024-11-18 20:35:13.155601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:01.211 [2024-11-18 20:35:13.155614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.211 [2024-11-18 20:35:13.155628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:01.211 [2024-11-18 20:35:13.155648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.211 [2024-11-18 20:35:13.155662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:01.211 [2024-11-18 20:35:13.155675] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.211 [2024-11-18 20:35:13.155688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdf400 is same with the state(6) to be set 00:34:01.211 [2024-11-18 20:35:13.165470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfdf400 (9): Bad file descriptor 00:34:01.211 [2024-11-18 20:35:13.175517] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:01.211 [2024-11-18 20:35:13.175539] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:01.211 [2024-11-18 20:35:13.175549] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:01.211 [2024-11-18 20:35:13.175558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:01.211 [2024-11-18 20:35:13.175616] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:34:02.150 20:35:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:02.150 20:35:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:02.150 20:35:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:02.150 20:35:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.150 20:35:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:02.150 20:35:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:02.150 20:35:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:02.408 [2024-11-18 20:35:14.201671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:34:02.408 [2024-11-18 20:35:14.201732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfdf400 with addr=10.0.0.2, port=4420 00:34:02.408 [2024-11-18 20:35:14.201762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdf400 is same with the state(6) to be set 00:34:02.408 [2024-11-18 20:35:14.201797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfdf400 (9): Bad file descriptor 00:34:02.408 [2024-11-18 20:35:14.202215] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:34:02.408 [2024-11-18 20:35:14.202255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:02.408 [2024-11-18 20:35:14.202272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:02.408 [2024-11-18 20:35:14.202288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:02.408 [2024-11-18 20:35:14.202300] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:02.408 [2024-11-18 20:35:14.202312] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:02.408 [2024-11-18 20:35:14.202320] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:02.408 [2024-11-18 20:35:14.202333] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:02.408 [2024-11-18 20:35:14.202342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:02.408 20:35:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.408 20:35:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:02.408 20:35:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:03.343 [2024-11-18 20:35:15.204834] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:03.343 [2024-11-18 20:35:15.204865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:34:03.343 [2024-11-18 20:35:15.204886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:03.343 [2024-11-18 20:35:15.204915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:03.343 [2024-11-18 20:35:15.204928] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:34:03.343 [2024-11-18 20:35:15.204942] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:03.343 [2024-11-18 20:35:15.204952] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:03.343 [2024-11-18 20:35:15.204974] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:03.343 [2024-11-18 20:35:15.205021] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:34:03.343 [2024-11-18 20:35:15.205077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:03.343 [2024-11-18 20:35:15.205100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.343 [2024-11-18 20:35:15.205120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:03.343 [2024-11-18 20:35:15.205133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.343 [2024-11-18 20:35:15.205146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:34:03.343 [2024-11-18 20:35:15.205164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.343 [2024-11-18 20:35:15.205177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:03.344 [2024-11-18 20:35:15.205189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.344 [2024-11-18 20:35:15.205203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:03.344 [2024-11-18 20:35:15.205215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.344 [2024-11-18 20:35:15.205228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:34:03.344 [2024-11-18 20:35:15.205281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfceb40 (9): Bad file descriptor 00:34:03.344 [2024-11-18 20:35:15.206273] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:34:03.344 [2024-11-18 20:35:15.206294] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:34:03.344 20:35:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:03.344 20:35:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:03.344 20:35:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:03.344 20:35:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:03.344 20:35:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:03.344 20:35:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:03.344 20:35:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:03.344 20:35:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.344 20:35:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:34:03.344 20:35:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:03.344 20:35:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:03.344 20:35:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:34:03.344 20:35:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:03.344 20:35:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:03.344 20:35:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:03.344 20:35:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.344 20:35:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:03.344 20:35:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:03.344 20:35:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:03.344 20:35:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:34:03.344 20:35:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:03.344 20:35:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:04.725 20:35:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:04.725 20:35:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:04.725 20:35:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:04.725 20:35:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.725 20:35:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:04.726 20:35:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:04.726 20:35:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:04.726 20:35:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.726 20:35:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:04.726 20:35:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:05.297 [2024-11-18 20:35:17.256307] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:05.297 [2024-11-18 20:35:17.256338] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:05.297 [2024-11-18 20:35:17.256360] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:05.558 [2024-11-18 20:35:17.384791] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:34:05.558 20:35:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:05.558 20:35:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:05.558 20:35:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.558 20:35:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:05.558 20:35:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:05.558 20:35:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:05.558 20:35:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:05.558 20:35:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.558 20:35:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:05.558 20:35:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:05.558 [2024-11-18 20:35:17.444379] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:34:05.558 [2024-11-18 20:35:17.445220] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xfe1720:1 started. 
00:34:05.558 [2024-11-18 20:35:17.446562] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:05.558 [2024-11-18 20:35:17.446609] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:05.558 [2024-11-18 20:35:17.446664] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:05.558 [2024-11-18 20:35:17.446688] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:34:05.558 [2024-11-18 20:35:17.446713] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:05.558 [2024-11-18 20:35:17.454559] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xfe1720 was disconnected and freed. delete nvme_qpair. 00:34:06.498 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:06.498 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:06.498 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:06.498 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.498 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:06.498 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:06.498 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:06.498 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.498 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:34:06.498 20:35:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:34:06.498 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 376877 00:34:06.498 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 376877 ']' 00:34:06.498 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 376877 00:34:06.498 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:34:06.498 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:06.498 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 376877 00:34:06.757 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:06.757 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:06.757 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 376877' 00:34:06.757 killing process with pid 376877 00:34:06.757 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 376877 00:34:06.757 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 376877 00:34:06.757 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:34:06.757 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:06.757 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:34:06.757 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:06.757 20:35:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:34:06.757 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:06.757 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:06.757 rmmod nvme_tcp 00:34:06.757 rmmod nvme_fabrics 00:34:06.757 rmmod nvme_keyring 00:34:07.016 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:07.016 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:34:07.016 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:34:07.016 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 376851 ']' 00:34:07.016 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 376851 00:34:07.016 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 376851 ']' 00:34:07.016 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 376851 00:34:07.016 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:34:07.016 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:07.016 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 376851 00:34:07.016 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:07.016 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:07.016 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 376851' 00:34:07.016 killing process 
with pid 376851 00:34:07.016 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 376851 00:34:07.016 20:35:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 376851 00:34:07.016 20:35:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:07.016 20:35:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:07.016 20:35:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:07.016 20:35:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:34:07.016 20:35:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:34:07.016 20:35:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:07.016 20:35:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:34:07.277 20:35:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:07.277 20:35:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:07.277 20:35:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:07.277 20:35:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:07.277 20:35:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:09.181 20:35:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:09.181 00:34:09.181 real 0m17.617s 00:34:09.181 user 0m25.590s 00:34:09.181 sys 0m2.971s 00:34:09.181 20:35:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:34:09.181 20:35:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:09.181 ************************************ 00:34:09.181 END TEST nvmf_discovery_remove_ifc 00:34:09.181 ************************************ 00:34:09.181 20:35:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:09.181 20:35:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:09.181 20:35:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:09.181 20:35:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.181 ************************************ 00:34:09.181 START TEST nvmf_identify_kernel_target 00:34:09.181 ************************************ 00:34:09.181 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:09.181 * Looking for test storage... 
00:34:09.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:09.181 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:09.181 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:34:09.181 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:34:09.440 20:35:21 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:09.440 20:35:21 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:09.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.440 --rc genhtml_branch_coverage=1 00:34:09.440 --rc genhtml_function_coverage=1 00:34:09.440 --rc genhtml_legend=1 00:34:09.440 --rc geninfo_all_blocks=1 00:34:09.440 --rc geninfo_unexecuted_blocks=1 00:34:09.440 00:34:09.440 ' 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:09.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.440 --rc genhtml_branch_coverage=1 00:34:09.440 --rc genhtml_function_coverage=1 00:34:09.440 --rc genhtml_legend=1 00:34:09.440 --rc geninfo_all_blocks=1 00:34:09.440 --rc geninfo_unexecuted_blocks=1 00:34:09.440 00:34:09.440 ' 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:09.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.440 --rc genhtml_branch_coverage=1 00:34:09.440 --rc genhtml_function_coverage=1 00:34:09.440 --rc genhtml_legend=1 00:34:09.440 --rc geninfo_all_blocks=1 00:34:09.440 --rc geninfo_unexecuted_blocks=1 00:34:09.440 00:34:09.440 ' 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:09.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.440 --rc genhtml_branch_coverage=1 00:34:09.440 --rc genhtml_function_coverage=1 00:34:09.440 --rc genhtml_legend=1 00:34:09.440 --rc geninfo_all_blocks=1 00:34:09.440 --rc geninfo_unexecuted_blocks=1 00:34:09.440 00:34:09.440 ' 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:09.440 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.441 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.441 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.441 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:34:09.441 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.441 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:34:09.441 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:09.441 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:09.441 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:09.441 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:09.441 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:09.441 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:09.441 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:09.441 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:09.441 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:09.441 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:09.441 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:34:09.441 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:09.441 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:09.441 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:09.441 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:09.441 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:09.441 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:09.441 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:09.441 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:09.441 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:09.441 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:09.441 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:09.441 20:35:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:11.982 20:35:23 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:11.982 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:11.982 20:35:23 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:11.982 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.982 20:35:23 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:11.982 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:11.982 Found net devices under 0000:0a:00.1: cvl_0_1 
00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:11.982 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:11.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:11.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:34:11.982 00:34:11.983 --- 10.0.0.2 ping statistics --- 00:34:11.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.983 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:11.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:11.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:34:11.983 00:34:11.983 --- 10.0.0.1 ping statistics --- 00:34:11.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.983 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:34:11.983 
20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:11.983 20:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:12.920 Waiting for block devices as requested 00:34:12.920 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:13.178 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:13.178 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:13.437 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:13.437 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:13.437 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:13.437 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:13.695 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:13.695 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:13.695 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:13.695 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:13.955 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:13.955 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:13.955 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:13.955 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 
00:34:13.955 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:14.214 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:14.214 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:14.214 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:14.214 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:14.214 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:14.214 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:14.214 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:14.214 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:14.214 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:14.214 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:14.214 No valid GPT data, bailing 00:34:14.214 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:14.214 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:34:14.214 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:34:14.214 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:14.214 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:14.214 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:14.214 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:14.214 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:14.214 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:14.214 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:34:14.214 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:14.474 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:34:14.474 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:14.474 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:34:14.474 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:34:14.474 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:34:14.474 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:14.474 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:34:14.474 00:34:14.474 Discovery Log Number of Records 2, Generation counter 2 00:34:14.474 =====Discovery Log Entry 0====== 00:34:14.474 trtype: tcp 00:34:14.474 adrfam: ipv4 00:34:14.474 subtype: current discovery subsystem 
00:34:14.474 treq: not specified, sq flow control disable supported 00:34:14.474 portid: 1 00:34:14.474 trsvcid: 4420 00:34:14.474 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:14.474 traddr: 10.0.0.1 00:34:14.474 eflags: none 00:34:14.474 sectype: none 00:34:14.474 =====Discovery Log Entry 1====== 00:34:14.474 trtype: tcp 00:34:14.474 adrfam: ipv4 00:34:14.474 subtype: nvme subsystem 00:34:14.474 treq: not specified, sq flow control disable supported 00:34:14.474 portid: 1 00:34:14.474 trsvcid: 4420 00:34:14.474 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:14.474 traddr: 10.0.0.1 00:34:14.474 eflags: none 00:34:14.474 sectype: none 00:34:14.474 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:34:14.474 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:34:14.474 ===================================================== 00:34:14.474 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:34:14.474 ===================================================== 00:34:14.474 Controller Capabilities/Features 00:34:14.474 ================================ 00:34:14.474 Vendor ID: 0000 00:34:14.474 Subsystem Vendor ID: 0000 00:34:14.474 Serial Number: 701021547e6d181a0166 00:34:14.474 Model Number: Linux 00:34:14.474 Firmware Version: 6.8.9-20 00:34:14.474 Recommended Arb Burst: 0 00:34:14.474 IEEE OUI Identifier: 00 00 00 00:34:14.474 Multi-path I/O 00:34:14.474 May have multiple subsystem ports: No 00:34:14.474 May have multiple controllers: No 00:34:14.474 Associated with SR-IOV VF: No 00:34:14.474 Max Data Transfer Size: Unlimited 00:34:14.474 Max Number of Namespaces: 0 00:34:14.474 Max Number of I/O Queues: 1024 00:34:14.474 NVMe Specification Version (VS): 1.3 00:34:14.474 NVMe Specification Version (Identify): 1.3 00:34:14.474 Maximum Queue Entries: 1024 
00:34:14.474 Contiguous Queues Required: No 00:34:14.474 Arbitration Mechanisms Supported 00:34:14.474 Weighted Round Robin: Not Supported 00:34:14.474 Vendor Specific: Not Supported 00:34:14.474 Reset Timeout: 7500 ms 00:34:14.474 Doorbell Stride: 4 bytes 00:34:14.474 NVM Subsystem Reset: Not Supported 00:34:14.474 Command Sets Supported 00:34:14.474 NVM Command Set: Supported 00:34:14.474 Boot Partition: Not Supported 00:34:14.474 Memory Page Size Minimum: 4096 bytes 00:34:14.474 Memory Page Size Maximum: 4096 bytes 00:34:14.474 Persistent Memory Region: Not Supported 00:34:14.474 Optional Asynchronous Events Supported 00:34:14.475 Namespace Attribute Notices: Not Supported 00:34:14.475 Firmware Activation Notices: Not Supported 00:34:14.475 ANA Change Notices: Not Supported 00:34:14.475 PLE Aggregate Log Change Notices: Not Supported 00:34:14.475 LBA Status Info Alert Notices: Not Supported 00:34:14.475 EGE Aggregate Log Change Notices: Not Supported 00:34:14.475 Normal NVM Subsystem Shutdown event: Not Supported 00:34:14.475 Zone Descriptor Change Notices: Not Supported 00:34:14.475 Discovery Log Change Notices: Supported 00:34:14.475 Controller Attributes 00:34:14.475 128-bit Host Identifier: Not Supported 00:34:14.475 Non-Operational Permissive Mode: Not Supported 00:34:14.475 NVM Sets: Not Supported 00:34:14.475 Read Recovery Levels: Not Supported 00:34:14.475 Endurance Groups: Not Supported 00:34:14.475 Predictable Latency Mode: Not Supported 00:34:14.475 Traffic Based Keep ALive: Not Supported 00:34:14.475 Namespace Granularity: Not Supported 00:34:14.475 SQ Associations: Not Supported 00:34:14.475 UUID List: Not Supported 00:34:14.475 Multi-Domain Subsystem: Not Supported 00:34:14.475 Fixed Capacity Management: Not Supported 00:34:14.475 Variable Capacity Management: Not Supported 00:34:14.475 Delete Endurance Group: Not Supported 00:34:14.475 Delete NVM Set: Not Supported 00:34:14.475 Extended LBA Formats Supported: Not Supported 00:34:14.475 Flexible 
Data Placement Supported: Not Supported 00:34:14.475 00:34:14.475 Controller Memory Buffer Support 00:34:14.475 ================================ 00:34:14.475 Supported: No 00:34:14.475 00:34:14.475 Persistent Memory Region Support 00:34:14.475 ================================ 00:34:14.475 Supported: No 00:34:14.475 00:34:14.475 Admin Command Set Attributes 00:34:14.475 ============================ 00:34:14.475 Security Send/Receive: Not Supported 00:34:14.475 Format NVM: Not Supported 00:34:14.475 Firmware Activate/Download: Not Supported 00:34:14.475 Namespace Management: Not Supported 00:34:14.475 Device Self-Test: Not Supported 00:34:14.475 Directives: Not Supported 00:34:14.475 NVMe-MI: Not Supported 00:34:14.475 Virtualization Management: Not Supported 00:34:14.475 Doorbell Buffer Config: Not Supported 00:34:14.475 Get LBA Status Capability: Not Supported 00:34:14.475 Command & Feature Lockdown Capability: Not Supported 00:34:14.475 Abort Command Limit: 1 00:34:14.475 Async Event Request Limit: 1 00:34:14.475 Number of Firmware Slots: N/A 00:34:14.475 Firmware Slot 1 Read-Only: N/A 00:34:14.475 Firmware Activation Without Reset: N/A 00:34:14.475 Multiple Update Detection Support: N/A 00:34:14.475 Firmware Update Granularity: No Information Provided 00:34:14.475 Per-Namespace SMART Log: No 00:34:14.475 Asymmetric Namespace Access Log Page: Not Supported 00:34:14.475 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:34:14.475 Command Effects Log Page: Not Supported 00:34:14.475 Get Log Page Extended Data: Supported 00:34:14.475 Telemetry Log Pages: Not Supported 00:34:14.475 Persistent Event Log Pages: Not Supported 00:34:14.475 Supported Log Pages Log Page: May Support 00:34:14.475 Commands Supported & Effects Log Page: Not Supported 00:34:14.475 Feature Identifiers & Effects Log Page:May Support 00:34:14.475 NVMe-MI Commands & Effects Log Page: May Support 00:34:14.475 Data Area 4 for Telemetry Log: Not Supported 00:34:14.475 Error Log Page Entries 
Supported: 1 00:34:14.475 Keep Alive: Not Supported 00:34:14.475 00:34:14.475 NVM Command Set Attributes 00:34:14.475 ========================== 00:34:14.475 Submission Queue Entry Size 00:34:14.475 Max: 1 00:34:14.475 Min: 1 00:34:14.475 Completion Queue Entry Size 00:34:14.475 Max: 1 00:34:14.475 Min: 1 00:34:14.475 Number of Namespaces: 0 00:34:14.475 Compare Command: Not Supported 00:34:14.475 Write Uncorrectable Command: Not Supported 00:34:14.475 Dataset Management Command: Not Supported 00:34:14.475 Write Zeroes Command: Not Supported 00:34:14.475 Set Features Save Field: Not Supported 00:34:14.475 Reservations: Not Supported 00:34:14.475 Timestamp: Not Supported 00:34:14.475 Copy: Not Supported 00:34:14.475 Volatile Write Cache: Not Present 00:34:14.475 Atomic Write Unit (Normal): 1 00:34:14.475 Atomic Write Unit (PFail): 1 00:34:14.475 Atomic Compare & Write Unit: 1 00:34:14.475 Fused Compare & Write: Not Supported 00:34:14.475 Scatter-Gather List 00:34:14.475 SGL Command Set: Supported 00:34:14.475 SGL Keyed: Not Supported 00:34:14.475 SGL Bit Bucket Descriptor: Not Supported 00:34:14.475 SGL Metadata Pointer: Not Supported 00:34:14.475 Oversized SGL: Not Supported 00:34:14.475 SGL Metadata Address: Not Supported 00:34:14.475 SGL Offset: Supported 00:34:14.475 Transport SGL Data Block: Not Supported 00:34:14.475 Replay Protected Memory Block: Not Supported 00:34:14.475 00:34:14.475 Firmware Slot Information 00:34:14.475 ========================= 00:34:14.475 Active slot: 0 00:34:14.475 00:34:14.475 00:34:14.475 Error Log 00:34:14.475 ========= 00:34:14.475 00:34:14.475 Active Namespaces 00:34:14.475 ================= 00:34:14.475 Discovery Log Page 00:34:14.475 ================== 00:34:14.475 Generation Counter: 2 00:34:14.475 Number of Records: 2 00:34:14.475 Record Format: 0 00:34:14.475 00:34:14.475 Discovery Log Entry 0 00:34:14.475 ---------------------- 00:34:14.475 Transport Type: 3 (TCP) 00:34:14.475 Address Family: 1 (IPv4) 00:34:14.475 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:34:14.475 Entry Flags: 00:34:14.475 Duplicate Returned Information: 0 00:34:14.475 Explicit Persistent Connection Support for Discovery: 0 00:34:14.475 Transport Requirements: 00:34:14.475 Secure Channel: Not Specified 00:34:14.475 Port ID: 1 (0x0001) 00:34:14.475 Controller ID: 65535 (0xffff) 00:34:14.475 Admin Max SQ Size: 32 00:34:14.475 Transport Service Identifier: 4420 00:34:14.475 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:34:14.475 Transport Address: 10.0.0.1 00:34:14.475 Discovery Log Entry 1 00:34:14.475 ---------------------- 00:34:14.475 Transport Type: 3 (TCP) 00:34:14.475 Address Family: 1 (IPv4) 00:34:14.475 Subsystem Type: 2 (NVM Subsystem) 00:34:14.475 Entry Flags: 00:34:14.475 Duplicate Returned Information: 0 00:34:14.475 Explicit Persistent Connection Support for Discovery: 0 00:34:14.475 Transport Requirements: 00:34:14.475 Secure Channel: Not Specified 00:34:14.475 Port ID: 1 (0x0001) 00:34:14.475 Controller ID: 65535 (0xffff) 00:34:14.475 Admin Max SQ Size: 32 00:34:14.475 Transport Service Identifier: 4420 00:34:14.475 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:34:14.475 Transport Address: 10.0.0.1 00:34:14.475 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:14.737 get_feature(0x01) failed 00:34:14.737 get_feature(0x02) failed 00:34:14.737 get_feature(0x04) failed 00:34:14.737 ===================================================== 00:34:14.737 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:14.737 ===================================================== 00:34:14.737 Controller Capabilities/Features 00:34:14.737 ================================ 00:34:14.737 Vendor ID: 0000 00:34:14.737 Subsystem Vendor ID: 
0000 00:34:14.737 Serial Number: 2a1efa90232dbb9d483d 00:34:14.737 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:34:14.737 Firmware Version: 6.8.9-20 00:34:14.737 Recommended Arb Burst: 6 00:34:14.737 IEEE OUI Identifier: 00 00 00 00:34:14.737 Multi-path I/O 00:34:14.737 May have multiple subsystem ports: Yes 00:34:14.737 May have multiple controllers: Yes 00:34:14.737 Associated with SR-IOV VF: No 00:34:14.737 Max Data Transfer Size: Unlimited 00:34:14.737 Max Number of Namespaces: 1024 00:34:14.737 Max Number of I/O Queues: 128 00:34:14.737 NVMe Specification Version (VS): 1.3 00:34:14.737 NVMe Specification Version (Identify): 1.3 00:34:14.737 Maximum Queue Entries: 1024 00:34:14.737 Contiguous Queues Required: No 00:34:14.737 Arbitration Mechanisms Supported 00:34:14.737 Weighted Round Robin: Not Supported 00:34:14.737 Vendor Specific: Not Supported 00:34:14.737 Reset Timeout: 7500 ms 00:34:14.737 Doorbell Stride: 4 bytes 00:34:14.737 NVM Subsystem Reset: Not Supported 00:34:14.737 Command Sets Supported 00:34:14.737 NVM Command Set: Supported 00:34:14.737 Boot Partition: Not Supported 00:34:14.737 Memory Page Size Minimum: 4096 bytes 00:34:14.737 Memory Page Size Maximum: 4096 bytes 00:34:14.737 Persistent Memory Region: Not Supported 00:34:14.737 Optional Asynchronous Events Supported 00:34:14.737 Namespace Attribute Notices: Supported 00:34:14.737 Firmware Activation Notices: Not Supported 00:34:14.737 ANA Change Notices: Supported 00:34:14.737 PLE Aggregate Log Change Notices: Not Supported 00:34:14.737 LBA Status Info Alert Notices: Not Supported 00:34:14.737 EGE Aggregate Log Change Notices: Not Supported 00:34:14.737 Normal NVM Subsystem Shutdown event: Not Supported 00:34:14.737 Zone Descriptor Change Notices: Not Supported 00:34:14.737 Discovery Log Change Notices: Not Supported 00:34:14.737 Controller Attributes 00:34:14.737 128-bit Host Identifier: Supported 00:34:14.737 Non-Operational Permissive Mode: Not Supported 00:34:14.737 NVM Sets: Not 
Supported 00:34:14.737 Read Recovery Levels: Not Supported 00:34:14.737 Endurance Groups: Not Supported 00:34:14.737 Predictable Latency Mode: Not Supported 00:34:14.737 Traffic Based Keep ALive: Supported 00:34:14.737 Namespace Granularity: Not Supported 00:34:14.737 SQ Associations: Not Supported 00:34:14.737 UUID List: Not Supported 00:34:14.737 Multi-Domain Subsystem: Not Supported 00:34:14.737 Fixed Capacity Management: Not Supported 00:34:14.737 Variable Capacity Management: Not Supported 00:34:14.737 Delete Endurance Group: Not Supported 00:34:14.737 Delete NVM Set: Not Supported 00:34:14.737 Extended LBA Formats Supported: Not Supported 00:34:14.737 Flexible Data Placement Supported: Not Supported 00:34:14.737 00:34:14.737 Controller Memory Buffer Support 00:34:14.737 ================================ 00:34:14.737 Supported: No 00:34:14.737 00:34:14.737 Persistent Memory Region Support 00:34:14.737 ================================ 00:34:14.737 Supported: No 00:34:14.737 00:34:14.737 Admin Command Set Attributes 00:34:14.737 ============================ 00:34:14.737 Security Send/Receive: Not Supported 00:34:14.737 Format NVM: Not Supported 00:34:14.737 Firmware Activate/Download: Not Supported 00:34:14.737 Namespace Management: Not Supported 00:34:14.737 Device Self-Test: Not Supported 00:34:14.737 Directives: Not Supported 00:34:14.737 NVMe-MI: Not Supported 00:34:14.737 Virtualization Management: Not Supported 00:34:14.737 Doorbell Buffer Config: Not Supported 00:34:14.737 Get LBA Status Capability: Not Supported 00:34:14.737 Command & Feature Lockdown Capability: Not Supported 00:34:14.737 Abort Command Limit: 4 00:34:14.737 Async Event Request Limit: 4 00:34:14.737 Number of Firmware Slots: N/A 00:34:14.737 Firmware Slot 1 Read-Only: N/A 00:34:14.737 Firmware Activation Without Reset: N/A 00:34:14.737 Multiple Update Detection Support: N/A 00:34:14.737 Firmware Update Granularity: No Information Provided 00:34:14.737 Per-Namespace SMART Log: Yes 
00:34:14.737 Asymmetric Namespace Access Log Page: Supported 00:34:14.738 ANA Transition Time : 10 sec 00:34:14.738 00:34:14.738 Asymmetric Namespace Access Capabilities 00:34:14.738 ANA Optimized State : Supported 00:34:14.738 ANA Non-Optimized State : Supported 00:34:14.738 ANA Inaccessible State : Supported 00:34:14.738 ANA Persistent Loss State : Supported 00:34:14.738 ANA Change State : Supported 00:34:14.738 ANAGRPID is not changed : No 00:34:14.738 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:34:14.738 00:34:14.738 ANA Group Identifier Maximum : 128 00:34:14.738 Number of ANA Group Identifiers : 128 00:34:14.738 Max Number of Allowed Namespaces : 1024 00:34:14.738 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:34:14.738 Command Effects Log Page: Supported 00:34:14.738 Get Log Page Extended Data: Supported 00:34:14.738 Telemetry Log Pages: Not Supported 00:34:14.738 Persistent Event Log Pages: Not Supported 00:34:14.738 Supported Log Pages Log Page: May Support 00:34:14.738 Commands Supported & Effects Log Page: Not Supported 00:34:14.738 Feature Identifiers & Effects Log Page:May Support 00:34:14.738 NVMe-MI Commands & Effects Log Page: May Support 00:34:14.738 Data Area 4 for Telemetry Log: Not Supported 00:34:14.738 Error Log Page Entries Supported: 128 00:34:14.738 Keep Alive: Supported 00:34:14.738 Keep Alive Granularity: 1000 ms 00:34:14.738 00:34:14.738 NVM Command Set Attributes 00:34:14.738 ========================== 00:34:14.738 Submission Queue Entry Size 00:34:14.738 Max: 64 00:34:14.738 Min: 64 00:34:14.738 Completion Queue Entry Size 00:34:14.738 Max: 16 00:34:14.738 Min: 16 00:34:14.738 Number of Namespaces: 1024 00:34:14.738 Compare Command: Not Supported 00:34:14.738 Write Uncorrectable Command: Not Supported 00:34:14.738 Dataset Management Command: Supported 00:34:14.738 Write Zeroes Command: Supported 00:34:14.738 Set Features Save Field: Not Supported 00:34:14.738 Reservations: Not Supported 00:34:14.738 Timestamp: Not Supported 
00:34:14.738 Copy: Not Supported 00:34:14.738 Volatile Write Cache: Present 00:34:14.738 Atomic Write Unit (Normal): 1 00:34:14.738 Atomic Write Unit (PFail): 1 00:34:14.738 Atomic Compare & Write Unit: 1 00:34:14.738 Fused Compare & Write: Not Supported 00:34:14.738 Scatter-Gather List 00:34:14.738 SGL Command Set: Supported 00:34:14.738 SGL Keyed: Not Supported 00:34:14.738 SGL Bit Bucket Descriptor: Not Supported 00:34:14.738 SGL Metadata Pointer: Not Supported 00:34:14.738 Oversized SGL: Not Supported 00:34:14.738 SGL Metadata Address: Not Supported 00:34:14.738 SGL Offset: Supported 00:34:14.738 Transport SGL Data Block: Not Supported 00:34:14.738 Replay Protected Memory Block: Not Supported 00:34:14.738 00:34:14.738 Firmware Slot Information 00:34:14.738 ========================= 00:34:14.738 Active slot: 0 00:34:14.738 00:34:14.738 Asymmetric Namespace Access 00:34:14.738 =========================== 00:34:14.738 Change Count : 0 00:34:14.738 Number of ANA Group Descriptors : 1 00:34:14.738 ANA Group Descriptor : 0 00:34:14.738 ANA Group ID : 1 00:34:14.738 Number of NSID Values : 1 00:34:14.738 Change Count : 0 00:34:14.738 ANA State : 1 00:34:14.738 Namespace Identifier : 1 00:34:14.738 00:34:14.738 Commands Supported and Effects 00:34:14.738 ============================== 00:34:14.738 Admin Commands 00:34:14.738 -------------- 00:34:14.738 Get Log Page (02h): Supported 00:34:14.738 Identify (06h): Supported 00:34:14.738 Abort (08h): Supported 00:34:14.738 Set Features (09h): Supported 00:34:14.738 Get Features (0Ah): Supported 00:34:14.738 Asynchronous Event Request (0Ch): Supported 00:34:14.738 Keep Alive (18h): Supported 00:34:14.738 I/O Commands 00:34:14.738 ------------ 00:34:14.738 Flush (00h): Supported 00:34:14.738 Write (01h): Supported LBA-Change 00:34:14.738 Read (02h): Supported 00:34:14.738 Write Zeroes (08h): Supported LBA-Change 00:34:14.738 Dataset Management (09h): Supported 00:34:14.738 00:34:14.738 Error Log 00:34:14.738 ========= 
00:34:14.738 Entry: 0 00:34:14.738 Error Count: 0x3 00:34:14.738 Submission Queue Id: 0x0 00:34:14.738 Command Id: 0x5 00:34:14.738 Phase Bit: 0 00:34:14.738 Status Code: 0x2 00:34:14.738 Status Code Type: 0x0 00:34:14.738 Do Not Retry: 1 00:34:14.738 Error Location: 0x28 00:34:14.738 LBA: 0x0 00:34:14.738 Namespace: 0x0 00:34:14.738 Vendor Log Page: 0x0 00:34:14.738 ----------- 00:34:14.738 Entry: 1 00:34:14.738 Error Count: 0x2 00:34:14.738 Submission Queue Id: 0x0 00:34:14.738 Command Id: 0x5 00:34:14.738 Phase Bit: 0 00:34:14.738 Status Code: 0x2 00:34:14.738 Status Code Type: 0x0 00:34:14.738 Do Not Retry: 1 00:34:14.738 Error Location: 0x28 00:34:14.738 LBA: 0x0 00:34:14.738 Namespace: 0x0 00:34:14.738 Vendor Log Page: 0x0 00:34:14.738 ----------- 00:34:14.738 Entry: 2 00:34:14.738 Error Count: 0x1 00:34:14.738 Submission Queue Id: 0x0 00:34:14.738 Command Id: 0x4 00:34:14.738 Phase Bit: 0 00:34:14.738 Status Code: 0x2 00:34:14.738 Status Code Type: 0x0 00:34:14.738 Do Not Retry: 1 00:34:14.738 Error Location: 0x28 00:34:14.738 LBA: 0x0 00:34:14.738 Namespace: 0x0 00:34:14.738 Vendor Log Page: 0x0 00:34:14.738 00:34:14.738 Number of Queues 00:34:14.738 ================ 00:34:14.738 Number of I/O Submission Queues: 128 00:34:14.738 Number of I/O Completion Queues: 128 00:34:14.738 00:34:14.738 ZNS Specific Controller Data 00:34:14.738 ============================ 00:34:14.738 Zone Append Size Limit: 0 00:34:14.738 00:34:14.738 00:34:14.738 Active Namespaces 00:34:14.738 ================= 00:34:14.738 get_feature(0x05) failed 00:34:14.738 Namespace ID:1 00:34:14.738 Command Set Identifier: NVM (00h) 00:34:14.738 Deallocate: Supported 00:34:14.738 Deallocated/Unwritten Error: Not Supported 00:34:14.738 Deallocated Read Value: Unknown 00:34:14.738 Deallocate in Write Zeroes: Not Supported 00:34:14.738 Deallocated Guard Field: 0xFFFF 00:34:14.738 Flush: Supported 00:34:14.738 Reservation: Not Supported 00:34:14.738 Namespace Sharing Capabilities: Multiple 
Controllers 00:34:14.738 Size (in LBAs): 1953525168 (931GiB) 00:34:14.738 Capacity (in LBAs): 1953525168 (931GiB) 00:34:14.738 Utilization (in LBAs): 1953525168 (931GiB) 00:34:14.738 UUID: 4fd84851-aea3-41e1-8b5c-567805a5318e 00:34:14.738 Thin Provisioning: Not Supported 00:34:14.738 Per-NS Atomic Units: Yes 00:34:14.738 Atomic Boundary Size (Normal): 0 00:34:14.738 Atomic Boundary Size (PFail): 0 00:34:14.738 Atomic Boundary Offset: 0 00:34:14.738 NGUID/EUI64 Never Reused: No 00:34:14.738 ANA group ID: 1 00:34:14.738 Namespace Write Protected: No 00:34:14.738 Number of LBA Formats: 1 00:34:14.738 Current LBA Format: LBA Format #00 00:34:14.738 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:14.738 00:34:14.738 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:34:14.738 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:14.738 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:34:14.738 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:14.738 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:34:14.738 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:14.738 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:14.738 rmmod nvme_tcp 00:34:14.738 rmmod nvme_fabrics 00:34:14.738 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:14.738 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:34:14.738 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:34:14.738 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:34:14.738 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:14.738 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:14.738 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:14.738 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:34:14.738 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:34:14.738 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:14.738 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:14.738 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:14.739 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:14.739 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:14.739 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:14.739 20:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:16.649 20:35:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:16.649 20:35:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:34:16.649 20:35:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:16.649 20:35:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:34:16.909 20:35:28 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:16.909 20:35:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:16.909 20:35:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:16.909 20:35:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:16.909 20:35:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:16.909 20:35:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:16.909 20:35:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:18.285 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:18.285 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:18.286 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:18.286 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:18.286 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:18.286 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:18.286 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:18.286 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:18.286 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:18.286 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:18.286 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:18.286 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:18.286 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:18.286 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:18.286 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:18.286 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 
00:34:19.222 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:19.222 00:34:19.222 real 0m9.987s 00:34:19.222 user 0m2.175s 00:34:19.222 sys 0m3.764s 00:34:19.222 20:35:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:19.222 20:35:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:19.222 ************************************ 00:34:19.222 END TEST nvmf_identify_kernel_target 00:34:19.222 ************************************ 00:34:19.222 20:35:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:19.222 20:35:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:19.222 20:35:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:19.222 20:35:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.222 ************************************ 00:34:19.222 START TEST nvmf_auth_host 00:34:19.222 ************************************ 00:34:19.222 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:19.222 * Looking for test storage... 
00:34:19.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:19.222 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:19.222 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:34:19.222 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:19.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.480 --rc genhtml_branch_coverage=1 00:34:19.480 --rc genhtml_function_coverage=1 00:34:19.480 --rc genhtml_legend=1 00:34:19.480 --rc geninfo_all_blocks=1 00:34:19.480 --rc geninfo_unexecuted_blocks=1 00:34:19.480 00:34:19.480 ' 00:34:19.480 20:35:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:19.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.480 --rc genhtml_branch_coverage=1 00:34:19.480 --rc genhtml_function_coverage=1 00:34:19.480 --rc genhtml_legend=1 00:34:19.480 --rc geninfo_all_blocks=1 00:34:19.480 --rc geninfo_unexecuted_blocks=1 00:34:19.480 00:34:19.480 ' 00:34:19.480 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:19.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.481 --rc genhtml_branch_coverage=1 00:34:19.481 --rc genhtml_function_coverage=1 00:34:19.481 --rc genhtml_legend=1 00:34:19.481 --rc geninfo_all_blocks=1 00:34:19.481 --rc geninfo_unexecuted_blocks=1 00:34:19.481 00:34:19.481 ' 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:19.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.481 --rc genhtml_branch_coverage=1 00:34:19.481 --rc genhtml_function_coverage=1 00:34:19.481 --rc genhtml_legend=1 00:34:19.481 --rc geninfo_all_blocks=1 00:34:19.481 --rc geninfo_unexecuted_blocks=1 00:34:19.481 00:34:19.481 ' 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.481 20:35:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:19.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:19.481 20:35:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:34:19.481 20:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.016 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:22.016 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:34:22.016 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:22.016 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:22.016 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:22.016 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:22.016 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:22.017 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:22.017 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:22.017 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:22.017 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:22.017 20:35:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:22.017 20:35:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:22.017 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:22.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:22.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:34:22.017 00:34:22.017 --- 10.0.0.2 ping statistics --- 00:34:22.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.017 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:22.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:22.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:34:22.018 00:34:22.018 --- 10.0.0.1 ping statistics --- 00:34:22.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.018 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=384095 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:22.018 20:35:33 
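The `nvmf_tcp_init` sequence above moves one interface of a connected NIC pair into a private network namespace so that target and initiator traffic crosses a real TCP path on a single host, then verifies both directions with `ping`. A dry-run sketch of the same steps (interface names `cvl_0_0`/`cvl_0_1`, the `10.0.0.0/24` addresses, and port 4420 are taken from this log; `run` only records each command, so on a real host you would drop it and execute the commands directly as root):

```shell
# Dry-run sketch of the netns TCP topology built in the log above.
# `run` records/prints each step instead of executing it.
steps=()
run() { steps+=("$*"); printf '+ %s\n' "$*"; }

NS=cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_0                 # start from a clean slate
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"                       # private namespace for the target
run ip link set cvl_0_0 netns "$NS"          # move the target-side NIC inside
run ip addr add 10.0.0.1/24 dev cvl_0_1
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port on the initiator side, then verify both directions
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

After this, any process launched with `ip netns exec cvl_0_0_ns_spdk` (as `NVMF_TARGET_NS_CMD` does for `nvmf_tgt`) only sees `cvl_0_0` and `lo`, which is why the target listens on 10.0.0.2 while the host-side initiator connects from 10.0.0.1.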
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 384095 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 384095 ']' 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=342e4577a27c1838f2f60159ac46f39f 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.tje 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 342e4577a27c1838f2f60159ac46f39f 0 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 342e4577a27c1838f2f60159ac46f39f 0 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=342e4577a27c1838f2f60159ac46f39f 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:22.018 20:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:22.018 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.tje 00:34:22.018 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.tje 00:34:22.018 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.tje 00:34:22.018 20:35:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:22.018 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:22.018 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:22.018 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:22.018 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:22.018 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:22.018 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:22.277 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=173967605ecc39cd54056357777c65bca3e16ec52301ab9548c8c8d770e83701 00:34:22.277 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:22.277 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.MwP 00:34:22.277 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 173967605ecc39cd54056357777c65bca3e16ec52301ab9548c8c8d770e83701 3 00:34:22.277 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 173967605ecc39cd54056357777c65bca3e16ec52301ab9548c8c8d770e83701 3 00:34:22.277 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:22.277 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:22.277 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=173967605ecc39cd54056357777c65bca3e16ec52301ab9548c8c8d770e83701 00:34:22.277 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:22.277 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:34:22.277 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.MwP 00:34:22.277 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.MwP 00:34:22.277 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.MwP 00:34:22.277 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:22.277 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6f81aa0db0d14024ddf9c6cbb6c831db09145d46af37a7e5 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.oko 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6f81aa0db0d14024ddf9c6cbb6c831db09145d46af37a7e5 0 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6f81aa0db0d14024ddf9c6cbb6c831db09145d46af37a7e5 0 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:22.278 20:35:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6f81aa0db0d14024ddf9c6cbb6c831db09145d46af37a7e5 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.oko 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.oko 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.oko 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=11173f1148a04d11ec9ef6a32621ed1aed7e048db869882d 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.pWJ 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 11173f1148a04d11ec9ef6a32621ed1aed7e048db869882d 2 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 11173f1148a04d11ec9ef6a32621ed1aed7e048db869882d 2 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=11173f1148a04d11ec9ef6a32621ed1aed7e048db869882d 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.pWJ 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.pWJ 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.pWJ 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a10340e7946b15dd042b34381abbd16f 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.20t 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a10340e7946b15dd042b34381abbd16f 1 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a10340e7946b15dd042b34381abbd16f 1 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a10340e7946b15dd042b34381abbd16f 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.20t 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.20t 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.20t 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=3faa7491a29f6d9ca1d80fe4a9f5d915 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.1eu 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3faa7491a29f6d9ca1d80fe4a9f5d915 1 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3faa7491a29f6d9ca1d80fe4a9f5d915 1 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3faa7491a29f6d9ca1d80fe4a9f5d915 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.1eu 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.1eu 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.1eu 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:22.278 20:35:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b9079f7be86ad0498961874667354f78f6ee2c755cfb6ad7 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.HAj 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b9079f7be86ad0498961874667354f78f6ee2c755cfb6ad7 2 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b9079f7be86ad0498961874667354f78f6ee2c755cfb6ad7 2 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b9079f7be86ad0498961874667354f78f6ee2c755cfb6ad7 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:22.278 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:22.536 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.HAj 00:34:22.536 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.HAj 00:34:22.536 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.HAj 00:34:22.536 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:22.536 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:22.536 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:22.536 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:22.536 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:22.536 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:22.536 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:22.536 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7e03f1ccd8dbb55e8fdea5073cc22540 00:34:22.536 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:22.536 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.DRb 00:34:22.536 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7e03f1ccd8dbb55e8fdea5073cc22540 0 00:34:22.536 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7e03f1ccd8dbb55e8fdea5073cc22540 0 00:34:22.536 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:22.536 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:22.536 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7e03f1ccd8dbb55e8fdea5073cc22540 00:34:22.536 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:22.536 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:22.536 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.DRb 00:34:22.536 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.DRb 00:34:22.536 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.DRb 00:34:22.536 20:35:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:22.536 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:22.536 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:22.536 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:22.536 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:22.536 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:22.536 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:22.537 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=db17aed3079380224c9371abed4b914bd4f4f3565f8f24af7134ca7c95a873e6 00:34:22.537 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:22.537 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.C1y 00:34:22.537 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key db17aed3079380224c9371abed4b914bd4f4f3565f8f24af7134ca7c95a873e6 3 00:34:22.537 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 db17aed3079380224c9371abed4b914bd4f4f3565f8f24af7134ca7c95a873e6 3 00:34:22.537 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:22.537 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:22.537 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=db17aed3079380224c9371abed4b914bd4f4f3565f8f24af7134ca7c95a873e6 00:34:22.537 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:22.537 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:34:22.537 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.C1y 00:34:22.537 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.C1y 00:34:22.537 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.C1y 00:34:22.537 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:22.537 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 384095 00:34:22.537 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 384095 ']' 00:34:22.537 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:22.537 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:22.537 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:22.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
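The repeated `gen_dhchap_key <digest> <len>` calls above each draw `len/2` random bytes as hex with `xxd`, wrap them in the DHHC-1 secret representation via an inline Python snippet, and drop the result into a `chmod 0600` temp file. A condensed sketch of that helper (assumptions: the DHHC-1 envelope is base64 of the key bytes followed by their little-endian CRC-32, per the NVMe DH-HMAC-CHAP secret format, with the digest encoded as null=0, sha256=1, sha384=2, sha512=3 as in the log's `digests` map):

```shell
# Sketch of gen_dhchap_key from nvmf/common.sh, reconstructed from the log.
gen_dhchap_key() {
    local digest=$1 len=$2 key file
    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # "len" hex characters
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 -c '
import base64, struct, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = struct.pack("<I", zlib.crc32(key))   # trailing CRC-32 catches typos
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
' "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"                               # secrets stay private
    echo "$file"
}

keyfile=$(gen_dhchap_key sha256 32)
```

So `gen_dhchap_key sha256 32` yields a file containing something like `DHHC-1:01:<base64>:`, matching the `/tmp/spdk.key-sha256.*` files the test stores in `keys[]` and `ckeys[]`.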
00:34:22.537 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:22.537 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.tje 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.MwP ]] 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.MwP 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.oko 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.pWJ ]] 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.pWJ 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.20t 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.1eu ]] 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1eu 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.HAj 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.DRb ]] 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.DRb 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.C1y 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:22.796 20:35:34 
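The registration loop above hands each generated secret file to the running `nvmf_tgt` through the JSON-RPC keyring: `keys[i]` becomes `key$i` and the matching controller key, when present, becomes `ckey$i` (`rpc_cmd` is the autotest wrapper around `scripts/rpc.py`). A dry-run of the same loop, using the key-file paths from this run and only collecting the RPC invocations it would issue:

```shell
# Key files as generated earlier in this log (keys[4] has no controller key).
keys=(/tmp/spdk.key-null.tje /tmp/spdk.key-null.oko /tmp/spdk.key-sha256.20t
      /tmp/spdk.key-sha384.HAj /tmp/spdk.key-sha512.C1y)
ckeys=(/tmp/spdk.key-sha512.MwP /tmp/spdk.key-sha384.pWJ
       /tmp/spdk.key-sha256.1eu /tmp/spdk.key-null.DRb "")
calls=()
for i in "${!keys[@]}"; do
    calls+=("keyring_file_add_key key$i ${keys[$i]}")
    # controller key is optional; ckeys[4] is empty and gets skipped
    [[ -n ${ckeys[$i]} ]] && calls+=("keyring_file_add_key ckey$i ${ckeys[$i]}")
done
printf '%s\n' "${calls[@]}"
```

In the live run each of these lines is an `rpc_cmd` call against `/var/tmp/spdk.sock`, which is why the log shows nine `keyring_file_add_key` invocations followed by `[[ -n '' ]]` for the missing `ckeys[4]`.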
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:22.796 20:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:24.176 Waiting for block devices as requested 00:34:24.176 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:24.176 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:24.176 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:24.436 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:24.436 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:24.436 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:24.436 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:24.694 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:24.694 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:24.694 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:24.951 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:24.951 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:24.951 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:24.951 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:24.951 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:25.209 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:25.209 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:25.775 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:25.775 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:25.775 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:25.775 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:25.775 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:34:25.775 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:25.775 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:25.775 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:25.775 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:25.775 No valid GPT data, bailing 00:34:25.775 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:25.775 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:34:25.775 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:34:25.776 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:25.776 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:25.776 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:25.776 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:25.776 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:25.776 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:25.776 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:34:25.776 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:25.776 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:34:25.776 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:34:25.776 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:34:25.776 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:34:25.776 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:34:25.776 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:25.776 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:34:25.776 00:34:25.776 Discovery Log Number of Records 2, Generation counter 2 00:34:25.776 =====Discovery Log Entry 0====== 00:34:25.776 trtype: tcp 00:34:25.776 adrfam: ipv4 00:34:25.776 subtype: current discovery subsystem 00:34:25.776 treq: not specified, sq flow control disable supported 00:34:25.776 portid: 1 00:34:25.776 trsvcid: 4420 00:34:25.776 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:25.776 traddr: 10.0.0.1 00:34:25.776 eflags: none 00:34:25.776 sectype: none 00:34:25.776 =====Discovery Log Entry 1====== 00:34:25.776 trtype: tcp 00:34:25.776 adrfam: ipv4 00:34:25.776 subtype: nvme subsystem 00:34:25.776 treq: not specified, sq flow control disable supported 00:34:25.776 portid: 1 00:34:25.776 trsvcid: 4420 00:34:25.776 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:25.776 traddr: 10.0.0.1 00:34:25.776 eflags: none 00:34:25.776 sectype: none 00:34:25.776 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:25.776 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:25.776 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:25.776 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:25.776 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.776 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:25.776 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:25.776 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:25.776 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:34:25.776 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:34:25.776 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:25.776 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: ]] 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.037 nvme0n1 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:26.037 20:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyZTQ1NzdhMjdjMTgzOGYyZjYwMTU5YWM0NmYzOWZ2a1Di: 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyZTQ1NzdhMjdjMTgzOGYyZjYwMTU5YWM0NmYzOWZ2a1Di: 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: ]] 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.037 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.296 nvme0n1 00:34:26.296 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.296 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.296 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.296 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.296 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.296 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.296 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.296 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.296 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.296 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.296 20:35:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.296 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.296 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:26.296 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.296 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:26.296 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:26.296 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:26.296 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:34:26.296 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:34:26.296 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:26.296 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:26.296 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:34:26.296 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: ]] 00:34:26.296 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:34:26.296 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:26.296 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.296 
20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:26.296 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:26.296 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:26.297 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.297 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:26.297 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.297 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.297 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.297 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.297 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:26.297 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.297 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.297 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.297 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.297 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.297 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.297 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.297 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.297 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.297 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:26.297 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.297 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.555 nvme0n1 00:34:26.555 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.555 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.555 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.555 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.555 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.555 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.555 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.555 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.555 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.555 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.555 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.555 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.555 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:26.555 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.555 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:26.555 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:26.555 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw: 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw: 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: ]] 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.556 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:34:26.814 nvme0n1 00:34:26.814 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.814 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.814 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.814 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.814 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.814 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.814 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YjkwNzlmN2JlODZhZDA0OTg5NjE4NzQ2NjczNTRmNzhmNmVlMmM3NTVjZmI2YWQ3XOXsAw==: 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjkwNzlmN2JlODZhZDA0OTg5NjE4NzQ2NjczNTRmNzhmNmVlMmM3NTVjZmI2YWQ3XOXsAw==: 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: ]] 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.815 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.073 nvme0n1 00:34:27.073 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.073 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.073 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGIxN2FlZDMwNzkzODAyMjRjOTM3MWFiZWQ0YjkxNGJkNGY0ZjM1NjVmOGYyNGFmNzEzNGNhN2M5NWE4NzNlNvth8b0=: 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:27.074 20:35:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGIxN2FlZDMwNzkzODAyMjRjOTM3MWFiZWQ0YjkxNGJkNGY0ZjM1NjVmOGYyNGFmNzEzNGNhN2M5NWE4NzNlNvth8b0=: 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.074 20:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.074 nvme0n1 00:34:27.074 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.074 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.074 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.074 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.074 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.074 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.333 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.333 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.333 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.333 
20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.333 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.333 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:27.333 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.333 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:27.333 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.333 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:27.333 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:27.333 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:27.333 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyZTQ1NzdhMjdjMTgzOGYyZjYwMTU5YWM0NmYzOWZ2a1Di: 00:34:27.333 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: 00:34:27.333 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:27.333 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyZTQ1NzdhMjdjMTgzOGYyZjYwMTU5YWM0NmYzOWZ2a1Di: 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: ]] 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: 00:34:27.591 
20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.591 20:35:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.591 nvme0n1 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.591 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.850 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.850 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.850 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.850 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.850 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.850 20:35:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.850 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:27.850 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.850 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:27.850 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:27.850 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: ]] 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:27.851 20:35:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.851 nvme0n1 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.851 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.110 20:35:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw: 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw: 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: ]] 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.110 20:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.110 nvme0n1 00:34:28.110 20:35:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.110 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.110 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.110 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.110 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.110 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.369 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.369 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.369 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.369 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.369 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.369 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.369 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:28.369 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.369 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:28.369 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:28.369 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:28.369 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjkwNzlmN2JlODZhZDA0OTg5NjE4NzQ2NjczNTRmNzhmNmVlMmM3NTVjZmI2YWQ3XOXsAw==: 00:34:28.369 20:35:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: 00:34:28.369 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:28.369 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:28.369 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjkwNzlmN2JlODZhZDA0OTg5NjE4NzQ2NjczNTRmNzhmNmVlMmM3NTVjZmI2YWQ3XOXsAw==: 00:34:28.369 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: ]] 00:34:28.369 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: 00:34:28.369 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:28.369 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.369 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:28.369 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:28.369 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:28.370 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.370 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:28.370 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.370 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.370 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.370 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:34:28.370 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:28.370 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:28.370 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:28.370 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.370 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.370 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:28.370 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.370 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:28.370 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:28.370 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:28.370 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:28.370 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.370 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.370 nvme0n1 00:34:28.370 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.370 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.370 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.370 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:28.370 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.370 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.628 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.628 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.628 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.628 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.628 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.628 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.628 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:28.628 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.628 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:28.628 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:28.628 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:28.628 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGIxN2FlZDMwNzkzODAyMjRjOTM3MWFiZWQ0YjkxNGJkNGY0ZjM1NjVmOGYyNGFmNzEzNGNhN2M5NWE4NzNlNvth8b0=: 00:34:28.628 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:28.628 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:28.628 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:28.628 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGIxN2FlZDMwNzkzODAyMjRjOTM3MWFiZWQ0YjkxNGJkNGY0ZjM1NjVmOGYyNGFmNzEzNGNhN2M5NWE4NzNlNvth8b0=: 00:34:28.628 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:28.628 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:28.628 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.628 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:28.629 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:28.629 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:28.629 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.629 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:28.629 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.629 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.629 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.629 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.629 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:28.629 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:28.629 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:28.629 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.629 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.629 20:35:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:28.629 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.629 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:28.629 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:28.629 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:28.629 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:28.629 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.629 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.629 nvme0n1 00:34:28.629 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.629 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.629 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.629 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.629 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.629 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.887 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.887 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.887 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.887 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:28.887 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.887 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:28.887 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.887 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:28.887 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.887 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:28.887 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:28.887 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:28.887 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyZTQ1NzdhMjdjMTgzOGYyZjYwMTU5YWM0NmYzOWZ2a1Di: 00:34:28.887 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: 00:34:28.887 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:28.887 20:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:29.456 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyZTQ1NzdhMjdjMTgzOGYyZjYwMTU5YWM0NmYzOWZ2a1Di: 00:34:29.456 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: ]] 00:34:29.456 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: 00:34:29.456 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:29.456 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.456 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:29.456 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:29.456 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:29.456 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.456 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:29.456 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.456 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.456 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.456 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.456 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:29.456 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:29.456 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:29.456 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.456 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.456 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:29.456 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.456 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:34:29.456 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:29.456 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:29.456 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:29.456 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.456 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.717 nvme0n1 00:34:29.717 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.717 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.717 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.717 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.717 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.717 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.717 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.717 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.717 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.717 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.717 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.717 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:34:29.717 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:29.717 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.717 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:29.717 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:29.717 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:29.717 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:34:29.717 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:34:29.717 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:29.717 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:29.717 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:34:29.717 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: ]] 00:34:29.718 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:34:29.718 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:29.718 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.718 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:29.718 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:29.718 
20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:29.718 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.718 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:29.718 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.718 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.718 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.718 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.718 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:29.718 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:29.718 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:29.718 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.718 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.718 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:29.718 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.718 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:29.718 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:29.718 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:29.718 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:29.718 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.718 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.978 nvme0n1 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:29.978 20:35:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw: 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw: 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: ]] 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.978 20:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.237 nvme0n1 00:34:30.237 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.237 20:35:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.237 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.237 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.237 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.237 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.237 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.237 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.237 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.237 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.496 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.496 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.496 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:30.496 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.496 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:30.496 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:30.497 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:30.497 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjkwNzlmN2JlODZhZDA0OTg5NjE4NzQ2NjczNTRmNzhmNmVlMmM3NTVjZmI2YWQ3XOXsAw==: 00:34:30.497 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: 00:34:30.497 
20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:30.497 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:30.497 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjkwNzlmN2JlODZhZDA0OTg5NjE4NzQ2NjczNTRmNzhmNmVlMmM3NTVjZmI2YWQ3XOXsAw==: 00:34:30.497 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: ]] 00:34:30.497 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: 00:34:30.497 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:30.497 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.497 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:30.497 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:30.497 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:30.497 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.497 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:30.497 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.497 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.497 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.497 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.497 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:30.497 20:35:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:30.497 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:30.497 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.497 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.497 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:30.497 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.497 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:30.497 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:30.497 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:30.497 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:30.497 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.497 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.758 nvme0n1 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.758 20:35:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGIxN2FlZDMwNzkzODAyMjRjOTM3MWFiZWQ0YjkxNGJkNGY0ZjM1NjVmOGYyNGFmNzEzNGNhN2M5NWE4NzNlNvth8b0=: 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGIxN2FlZDMwNzkzODAyMjRjOTM3MWFiZWQ0YjkxNGJkNGY0ZjM1NjVmOGYyNGFmNzEzNGNhN2M5NWE4NzNlNvth8b0=: 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.758 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.759 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.759 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.759 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:30.759 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:30.759 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:30.759 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.759 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.759 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:30.759 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.759 
20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:30.759 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:30.759 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:30.759 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:30.759 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.759 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.018 nvme0n1 00:34:31.018 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.018 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.018 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.018 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.018 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.018 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.018 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.018 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.018 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.018 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.018 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.018 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:31.018 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.018 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:34:31.018 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.018 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:31.018 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:31.018 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:31.018 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyZTQ1NzdhMjdjMTgzOGYyZjYwMTU5YWM0NmYzOWZ2a1Di: 00:34:31.018 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: 00:34:31.018 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:31.018 20:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:32.918 20:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyZTQ1NzdhMjdjMTgzOGYyZjYwMTU5YWM0NmYzOWZ2a1Di: 00:34:32.918 20:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: ]] 00:34:32.918 20:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: 00:34:32.918 20:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:32.918 20:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.918 20:35:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:32.918 20:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:32.918 20:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:32.918 20:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.918 20:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:32.918 20:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.918 20:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.919 20:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.919 20:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.919 20:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:32.919 20:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:32.919 20:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:32.919 20:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.919 20:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.919 20:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:32.919 20:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.919 20:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:32.919 20:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:32.919 20:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:32.919 20:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:32.919 20:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.919 20:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.177 nvme0n1 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: ]] 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:33.177 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:33.436 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.436 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.436 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:33.436 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.436 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:33.436 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:33.436 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:33.436 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:33.436 20:35:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.436 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.696 nvme0n1 00:34:33.696 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.696 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.696 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.696 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.696 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.696 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.696 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.696 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.696 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.696 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw: 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw: 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: ]] 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.958 20:35:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.958 20:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.218 nvme0n1 00:34:34.218 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.218 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.218 20:35:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.218 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.218 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjkwNzlmN2JlODZhZDA0OTg5NjE4NzQ2NjczNTRmNzhmNmVlMmM3NTVjZmI2YWQ3XOXsAw==: 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:34.479 20:35:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjkwNzlmN2JlODZhZDA0OTg5NjE4NzQ2NjczNTRmNzhmNmVlMmM3NTVjZmI2YWQ3XOXsAw==: 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: ]] 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:34.479 20:35:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.479 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.051 nvme0n1 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.051 20:35:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGIxN2FlZDMwNzkzODAyMjRjOTM3MWFiZWQ0YjkxNGJkNGY0ZjM1NjVmOGYyNGFmNzEzNGNhN2M5NWE4NzNlNvth8b0=: 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGIxN2FlZDMwNzkzODAyMjRjOTM3MWFiZWQ0YjkxNGJkNGY0ZjM1NjVmOGYyNGFmNzEzNGNhN2M5NWE4NzNlNvth8b0=: 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.051 20:35:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.051 20:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.624 nvme0n1 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyZTQ1NzdhMjdjMTgzOGYyZjYwMTU5YWM0NmYzOWZ2a1Di: 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyZTQ1NzdhMjdjMTgzOGYyZjYwMTU5YWM0NmYzOWZ2a1Di: 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: ]] 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:35.624 20:35:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.624 20:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.566 nvme0n1 00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.566 20:35:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: ]] 00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.566 20:35:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:36.566 20:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.507 nvme0n1
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw:
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O:
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw:
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: ]]
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O:
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.507 20:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:38.447 nvme0n1
00:34:38.447 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:38.447 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:38.447 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:38.447 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:38.447 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:38.447 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:38.447 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:38.447 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:38.447 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:38.447 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:38.447 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:38.447 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:38.447 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:34:38.447 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:38.447 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:38.447 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:38.447 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:38.447 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjkwNzlmN2JlODZhZDA0OTg5NjE4NzQ2NjczNTRmNzhmNmVlMmM3NTVjZmI2YWQ3XOXsAw==:
00:34:38.447 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y:
00:34:38.447 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:38.447 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:38.447 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjkwNzlmN2JlODZhZDA0OTg5NjE4NzQ2NjczNTRmNzhmNmVlMmM3NTVjZmI2YWQ3XOXsAw==:
00:34:38.447 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: ]]
00:34:38.447 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y:
00:34:38.447 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:34:38.447 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:38.448 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:38.448 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:38.448 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:38.448 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:38.448 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:34:38.448 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:38.448 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:38.448 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:38.448 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:38.448 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:38.448 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:38.448 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:38.448 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:38.448 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:38.448 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:38.448 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:38.448 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:38.448 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:38.448 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:38.448 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:38.448 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:38.448 20:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:39.385 nvme0n1
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGIxN2FlZDMwNzkzODAyMjRjOTM3MWFiZWQ0YjkxNGJkNGY0ZjM1NjVmOGYyNGFmNzEzNGNhN2M5NWE4NzNlNvth8b0=:
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGIxN2FlZDMwNzkzODAyMjRjOTM3MWFiZWQ0YjkxNGJkNGY0ZjM1NjVmOGYyNGFmNzEzNGNhN2M5NWE4NzNlNvth8b0=:
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:39.385 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:40.321 nvme0n1
00:34:40.321 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:40.321 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:40.321 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:40.321 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:40.321 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:40.321 20:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyZTQ1NzdhMjdjMTgzOGYyZjYwMTU5YWM0NmYzOWZ2a1Di:
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=:
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyZTQ1NzdhMjdjMTgzOGYyZjYwMTU5YWM0NmYzOWZ2a1Di:
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: ]]
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=:
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:40.321 nvme0n1
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:40.321 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==:
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==:
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==:
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: ]]
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==:
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:40.322 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:40.582 nvme0n1
00:34:40.582 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:40.582 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:40.582 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:40.582 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw:
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O:
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw:
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: ]]
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O:
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:40.583 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:40.844 nvme0n1
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjkwNzlmN2JlODZhZDA0OTg5NjE4NzQ2NjczNTRmNzhmNmVlMmM3NTVjZmI2YWQ3XOXsAw==:
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y:
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjkwNzlmN2JlODZhZDA0OTg5NjE4NzQ2NjczNTRmNzhmNmVlMmM3NTVjZmI2YWQ3XOXsAw==:
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: ]]
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y:
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:40.844 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:41.105 nvme0n1
00:34:41.105 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:41.105 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:41.105 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:41.105 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:41.105 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:41.105 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:41.105 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:41.105 20:35:52
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.105 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.105 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.105 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.105 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.105 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:41.105 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.105 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:41.105 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:41.105 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:41.106 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGIxN2FlZDMwNzkzODAyMjRjOTM3MWFiZWQ0YjkxNGJkNGY0ZjM1NjVmOGYyNGFmNzEzNGNhN2M5NWE4NzNlNvth8b0=: 00:34:41.106 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:41.106 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:41.106 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:41.106 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGIxN2FlZDMwNzkzODAyMjRjOTM3MWFiZWQ0YjkxNGJkNGY0ZjM1NjVmOGYyNGFmNzEzNGNhN2M5NWE4NzNlNvth8b0=: 00:34:41.106 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:41.106 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:41.106 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:34:41.106 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:41.106 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:41.106 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:41.106 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.106 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:41.106 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.106 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.106 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.106 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.106 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.106 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.106 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.106 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.106 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.106 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.106 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.106 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.106 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.106 20:35:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.106 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:41.106 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.106 20:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.367 nvme0n1 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyZTQ1NzdhMjdjMTgzOGYyZjYwMTU5YWM0NmYzOWZ2a1Di: 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyZTQ1NzdhMjdjMTgzOGYyZjYwMTU5YWM0NmYzOWZ2a1Di: 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: ]] 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # keyid=0 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.367 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.368 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.368 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.368 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.368 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.368 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.368 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.368 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:41.368 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.368 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.629 nvme0n1 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: ]] 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:41.629 20:35:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.629 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.890 nvme0n1 00:34:41.890 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.890 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.890 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.890 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.890 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.890 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.890 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.890 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.890 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.890 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.890 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.890 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.890 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:41.890 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.890 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:41.890 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:41.890 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:41.890 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw: 00:34:41.890 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: 00:34:41.890 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:41.890 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:41.890 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw: 00:34:41.890 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: ]] 00:34:41.890 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: 00:34:41.890 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:41.891 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.891 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:41.891 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:41.891 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:41.891 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.891 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:41.891 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.891 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.891 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.891 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.891 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:34:41.891 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.891 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.891 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.891 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.891 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.891 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.891 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.891 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.891 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.891 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:41.891 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.891 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.150 nvme0n1 00:34:42.150 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.150 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.150 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.150 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.150 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:42.150 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.150 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.150 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.150 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.150 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.150 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.150 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.150 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:42.150 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.150 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:42.150 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:42.150 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:42.150 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjkwNzlmN2JlODZhZDA0OTg5NjE4NzQ2NjczNTRmNzhmNmVlMmM3NTVjZmI2YWQ3XOXsAw==: 00:34:42.150 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: 00:34:42.150 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:42.150 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:42.150 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjkwNzlmN2JlODZhZDA0OTg5NjE4NzQ2NjczNTRmNzhmNmVlMmM3NTVjZmI2YWQ3XOXsAw==: 
00:34:42.150 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: ]] 00:34:42.150 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: 00:34:42.150 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:42.150 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.150 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:42.150 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:42.150 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:42.150 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.151 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:42.151 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.151 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.151 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.151 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.151 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:42.151 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:42.151 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:42.151 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.151 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:42.151 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:42.151 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:42.151 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:42.151 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:42.151 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:42.151 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:42.151 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:42.151 20:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:42.412 nvme0n1
00:34:42.412 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:42.412 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:42.412 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:42.412 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:42.412 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:42.412 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:42.412 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:42.412 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:42.412 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:42.412 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:42.412 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:42.412 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:42.412 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:34:42.412 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:42.412 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:42.412 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:34:42.412 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:42.412 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGIxN2FlZDMwNzkzODAyMjRjOTM3MWFiZWQ0YjkxNGJkNGY0ZjM1NjVmOGYyNGFmNzEzNGNhN2M5NWE4NzNlNvth8b0=:
00:34:42.412 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:42.412 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:42.412 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:34:42.412 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGIxN2FlZDMwNzkzODAyMjRjOTM3MWFiZWQ0YjkxNGJkNGY0ZjM1NjVmOGYyNGFmNzEzNGNhN2M5NWE4NzNlNvth8b0=:
00:34:42.412 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:42.412 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:34:42.412 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:42.413 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:42.413 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:34:42.413 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:42.413 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:42.413 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:34:42.413 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:42.413 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:42.413 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:42.413 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:42.413 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:42.413 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:42.413 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:42.413 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:42.413 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:42.413 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:42.413 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:42.413 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:42.413 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:42.413 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:42.413 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:42.413 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:42.413 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:42.413 nvme0n1
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyZTQ1NzdhMjdjMTgzOGYyZjYwMTU5YWM0NmYzOWZ2a1Di:
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=:
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyZTQ1NzdhMjdjMTgzOGYyZjYwMTU5YWM0NmYzOWZ2a1Di:
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: ]]
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=:
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:42.673 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:42.674 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:42.674 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:42.674 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:42.674 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:42.674 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:42.674 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:42.674 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:42.674 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:42.674 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:42.674 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:42.936 nvme0n1
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==:
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==:
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==:
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: ]]
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==:
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:42.936 20:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:43.199 nvme0n1
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw:
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O:
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw:
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: ]]
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O:
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:43.199 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:43.459 nvme0n1
00:34:43.459 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:43.459 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:43.459 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:43.459 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:43.459 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:43.459 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:43.718 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:43.718 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:43.718 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:43.718 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:43.718 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:43.718 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:43.718 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:34:43.718 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:43.718 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:43.718 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:34:43.718 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:43.718 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjkwNzlmN2JlODZhZDA0OTg5NjE4NzQ2NjczNTRmNzhmNmVlMmM3NTVjZmI2YWQ3XOXsAw==:
00:34:43.718 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y:
00:34:43.718 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:43.718 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:34:43.718 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjkwNzlmN2JlODZhZDA0OTg5NjE4NzQ2NjczNTRmNzhmNmVlMmM3NTVjZmI2YWQ3XOXsAw==:
00:34:43.718 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: ]]
00:34:43.719 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y:
00:34:43.719 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3
00:34:43.719 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:43.719 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:43.719 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:34:43.719 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:43.719 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:43.719 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:34:43.719 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:43.719 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:43.719 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:43.719 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:43.719 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:43.719 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:43.719 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:43.719 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:43.719 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:43.719 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:43.719 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:43.719 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:43.719 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:43.719 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:43.719 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:43.719 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:43.719 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:43.978 nvme0n1
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGIxN2FlZDMwNzkzODAyMjRjOTM3MWFiZWQ0YjkxNGJkNGY0ZjM1NjVmOGYyNGFmNzEzNGNhN2M5NWE4NzNlNvth8b0=:
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGIxN2FlZDMwNzkzODAyMjRjOTM3MWFiZWQ0YjkxNGJkNGY0ZjM1NjVmOGYyNGFmNzEzNGNhN2M5NWE4NzNlNvth8b0=:
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:43.978 20:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:44.237 nvme0n1
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyZTQ1NzdhMjdjMTgzOGYyZjYwMTU5YWM0NmYzOWZ2a1Di:
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=:
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyZTQ1NzdhMjdjMTgzOGYyZjYwMTU5YWM0NmYzOWZ2a1Di:
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: ]]
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=:
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:44.237 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:44.809 nvme0n1
00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1
00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==:
00:34:44.809 20:35:56
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: ]] 00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.809 
20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.809 20:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.382 nvme0n1 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.382 20:35:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw: 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw: 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: ]] 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:45.382 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:45.383 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:45.383 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.383 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.383 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:45.383 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.383 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:45.383 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:45.383 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:45.383 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:45.383 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.383 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.952 nvme0n1 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjkwNzlmN2JlODZhZDA0OTg5NjE4NzQ2NjczNTRmNzhmNmVlMmM3NTVjZmI2YWQ3XOXsAw==: 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjkwNzlmN2JlODZhZDA0OTg5NjE4NzQ2NjczNTRmNzhmNmVlMmM3NTVjZmI2YWQ3XOXsAw==: 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: ]] 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.952 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.953 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:45.953 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:45.953 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:45.953 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.953 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.953 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:45.953 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.953 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:34:45.953 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:45.953 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:45.953 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:45.953 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.953 20:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.523 nvme0n1 00:34:46.523 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.523 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.523 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.523 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:46.523 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.523 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.523 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.523 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:46.523 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.523 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.523 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.523 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:46.523 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:46.523 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.523 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:46.523 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:46.523 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:46.523 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGIxN2FlZDMwNzkzODAyMjRjOTM3MWFiZWQ0YjkxNGJkNGY0ZjM1NjVmOGYyNGFmNzEzNGNhN2M5NWE4NzNlNvth8b0=: 00:34:46.523 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:46.523 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:46.523 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:46.523 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGIxN2FlZDMwNzkzODAyMjRjOTM3MWFiZWQ0YjkxNGJkNGY0ZjM1NjVmOGYyNGFmNzEzNGNhN2M5NWE4NzNlNvth8b0=: 00:34:46.523 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:46.523 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:46.523 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.523 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:46.523 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:46.524 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:46.524 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:46.524 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:46.524 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.524 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.524 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.524 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:46.524 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:46.524 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:46.524 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:46.524 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.524 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.524 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:46.524 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:46.524 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:46.524 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:46.524 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:46.524 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:46.524 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.524 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:47.094 nvme0n1 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:47.094 20:35:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyZTQ1NzdhMjdjMTgzOGYyZjYwMTU5YWM0NmYzOWZ2a1Di: 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyZTQ1NzdhMjdjMTgzOGYyZjYwMTU5YWM0NmYzOWZ2a1Di: 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: ]] 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:47.094 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.094 20:35:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.095 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.095 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.095 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:47.095 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.095 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.095 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.095 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.095 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:47.095 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.095 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:47.095 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:47.095 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:47.095 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:47.095 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.095 20:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.083 nvme0n1 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:34:48.083 20:35:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: ]] 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.083 20:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.765 nvme0n1 00:34:48.765 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.765 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.765 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.765 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.765 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.765 
20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.084 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw: 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw: 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: ]] 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.085 20:36:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.085 20:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.729 nvme0n1 00:34:49.729 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.729 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.729 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.729 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.729 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.729 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.729 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.729 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.729 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.729 20:36:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.729 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.729 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.729 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:49.729 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.729 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:49.729 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:49.729 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:49.729 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjkwNzlmN2JlODZhZDA0OTg5NjE4NzQ2NjczNTRmNzhmNmVlMmM3NTVjZmI2YWQ3XOXsAw==: 00:34:49.729 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: 00:34:49.729 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:49.729 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:49.729 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjkwNzlmN2JlODZhZDA0OTg5NjE4NzQ2NjczNTRmNzhmNmVlMmM3NTVjZmI2YWQ3XOXsAw==: 00:34:49.729 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: ]] 00:34:49.729 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: 00:34:49.729 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:49.729 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:34:49.729 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:49.729 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:49.729 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:49.729 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.730 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:49.730 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.730 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.987 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.987 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.987 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.987 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.987 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.987 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.987 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.987 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.987 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.987 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.988 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.988 20:36:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.988 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:49.988 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.988 20:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.924 nvme0n1 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:50.924 20:36:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGIxN2FlZDMwNzkzODAyMjRjOTM3MWFiZWQ0YjkxNGJkNGY0ZjM1NjVmOGYyNGFmNzEzNGNhN2M5NWE4NzNlNvth8b0=: 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGIxN2FlZDMwNzkzODAyMjRjOTM3MWFiZWQ0YjkxNGJkNGY0ZjM1NjVmOGYyNGFmNzEzNGNhN2M5NWE4NzNlNvth8b0=: 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.924 20:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.493 nvme0n1 00:34:51.493 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.493 
20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.493 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.493 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.493 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.493 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzQyZTQ1NzdhMjdjMTgzOGYyZjYwMTU5YWM0NmYzOWZ2a1Di: 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyZTQ1NzdhMjdjMTgzOGYyZjYwMTU5YWM0NmYzOWZ2a1Di: 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: ]] 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.753 nvme0n1 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.753 20:36:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.753 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.754 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.754 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.754 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: ]] 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # ip_candidates=() 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.013 nvme0n1 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw: 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw: 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: ]] 00:34:52.013 20:36:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:34:52.013 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:52.014 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:52.014 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:52.014 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:52.014 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:52.014 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.014 20:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.273 nvme0n1 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.273 20:36:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjkwNzlmN2JlODZhZDA0OTg5NjE4NzQ2NjczNTRmNzhmNmVlMmM3NTVjZmI2YWQ3XOXsAw==: 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjkwNzlmN2JlODZhZDA0OTg5NjE4NzQ2NjczNTRmNzhmNmVlMmM3NTVjZmI2YWQ3XOXsAw==: 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: ]] 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:52.273 20:36:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.273 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.532 nvme0n1 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGIxN2FlZDMwNzkzODAyMjRjOTM3MWFiZWQ0YjkxNGJkNGY0ZjM1NjVmOGYyNGFmNzEzNGNhN2M5NWE4NzNlNvth8b0=: 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGIxN2FlZDMwNzkzODAyMjRjOTM3MWFiZWQ0YjkxNGJkNGY0ZjM1NjVmOGYyNGFmNzEzNGNhN2M5NWE4NzNlNvth8b0=: 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.532 20:36:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:52.532 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.533 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.792 nvme0n1 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyZTQ1NzdhMjdjMTgzOGYyZjYwMTU5YWM0NmYzOWZ2a1Di: 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyZTQ1NzdhMjdjMTgzOGYyZjYwMTU5YWM0NmYzOWZ2a1Di: 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: ]] 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.792 20:36:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.792 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.053 nvme0n1 00:34:53.053 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.053 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:53.053 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:53.053 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.053 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.053 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.053 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:53.053 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:53.053 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.053 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.053 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.053 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:53.053 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:53.053 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:53.053 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:53.053 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:53.053 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:53.053 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:34:53.053 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:34:53.053 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:53.053 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:53.053 20:36:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:34:53.054 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: ]] 00:34:53.054 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:34:53.054 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:53.054 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:53.054 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:53.054 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:53.054 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:53.054 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:53.054 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:53.054 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.054 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.054 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.054 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:53.054 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:53.054 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:53.054 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:34:53.054 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:53.054 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:53.054 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:53.054 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:53.054 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:53.054 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:53.054 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:53.054 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:53.054 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.054 20:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.313 nvme0n1 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw: 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw: 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: ]] 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: 00:34:53.313 
20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:53.313 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.314 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.314 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.314 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:53.314 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:53.314 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:53.314 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:53.314 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:53.314 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:53.314 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:53.314 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:53.314 20:36:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:53.314 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:53.314 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:53.314 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:53.314 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.314 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.572 nvme0n1 00:34:53.572 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.572 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:53.572 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.572 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.572 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:53.572 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.572 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:53.572 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:53.572 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.572 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.572 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.572 20:36:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:53.572 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:53.572 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:53.572 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:53.572 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:53.572 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:53.572 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjkwNzlmN2JlODZhZDA0OTg5NjE4NzQ2NjczNTRmNzhmNmVlMmM3NTVjZmI2YWQ3XOXsAw==: 00:34:53.572 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: 00:34:53.572 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:53.572 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:53.572 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjkwNzlmN2JlODZhZDA0OTg5NjE4NzQ2NjczNTRmNzhmNmVlMmM3NTVjZmI2YWQ3XOXsAw==: 00:34:53.572 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: ]] 00:34:53.573 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: 00:34:53.573 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:53.573 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:53.573 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:53.573 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:34:53.573 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:53.573 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:53.573 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:53.573 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.573 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.573 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.573 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:53.573 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:53.573 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:53.573 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:53.573 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:53.573 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:53.573 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:53.573 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:53.573 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:53.573 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:53.573 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:53.573 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:53.573 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.573 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.833 nvme0n1 00:34:53.833 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.833 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:53.833 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:53.834 20:36:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGIxN2FlZDMwNzkzODAyMjRjOTM3MWFiZWQ0YjkxNGJkNGY0ZjM1NjVmOGYyNGFmNzEzNGNhN2M5NWE4NzNlNvth8b0=: 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGIxN2FlZDMwNzkzODAyMjRjOTM3MWFiZWQ0YjkxNGJkNGY0ZjM1NjVmOGYyNGFmNzEzNGNhN2M5NWE4NzNlNvth8b0=: 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.834 nvme0n1 00:34:53.834 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyZTQ1NzdhMjdjMTgzOGYyZjYwMTU5YWM0NmYzOWZ2a1Di: 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyZTQ1NzdhMjdjMTgzOGYyZjYwMTU5YWM0NmYzOWZ2a1Di: 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: ]] 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:54.095 20:36:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.095 20:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.355 nvme0n1 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.355 20:36:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:34:54.355 20:36:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: ]] 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:54.355 20:36:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.355 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.616 nvme0n1 00:34:54.616 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.616 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:54.616 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:54.616 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.616 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.616 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.616 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:54.616 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:54.616 20:36:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.616 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.616 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.616 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:54.616 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:54.616 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:54.616 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:54.616 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:54.616 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:54.616 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw: 00:34:54.616 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: 00:34:54.616 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:54.616 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:54.616 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw: 00:34:54.616 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: ]] 00:34:54.616 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: 00:34:54.616 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:54.616 20:36:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:54.616 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:54.617 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:54.617 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:54.617 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:54.617 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:54.617 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.617 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.617 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.617 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:54.617 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:54.617 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:54.617 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:54.617 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:54.617 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:54.617 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:54.617 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:54.617 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:54.617 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:54.617 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:54.617 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:54.617 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.617 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.876 nvme0n1 00:34:54.876 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.876 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:54.876 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:54.876 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.876 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.876 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjkwNzlmN2JlODZhZDA0OTg5NjE4NzQ2NjczNTRmNzhmNmVlMmM3NTVjZmI2YWQ3XOXsAw==: 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjkwNzlmN2JlODZhZDA0OTg5NjE4NzQ2NjczNTRmNzhmNmVlMmM3NTVjZmI2YWQ3XOXsAw==: 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: ]] 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:55.136 20:36:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.136 20:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.398 nvme0n1 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGIxN2FlZDMwNzkzODAyMjRjOTM3MWFiZWQ0YjkxNGJkNGY0ZjM1NjVmOGYyNGFmNzEzNGNhN2M5NWE4NzNlNvth8b0=: 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGIxN2FlZDMwNzkzODAyMjRjOTM3MWFiZWQ0YjkxNGJkNGY0ZjM1NjVmOGYyNGFmNzEzNGNhN2M5NWE4NzNlNvth8b0=: 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.398 
20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.398 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.657 nvme0n1 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyZTQ1NzdhMjdjMTgzOGYyZjYwMTU5YWM0NmYzOWZ2a1Di: 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:55.657 20:36:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyZTQ1NzdhMjdjMTgzOGYyZjYwMTU5YWM0NmYzOWZ2a1Di: 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: ]] 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.657 20:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.227 nvme0n1 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: ]] 00:34:56.227 20:36:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.227 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:34:56.228 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:56.228 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:56.228 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:56.228 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:56.228 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:56.228 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.228 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.796 nvme0n1 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw: 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw: 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: ]] 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:56.796 
20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.796 20:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.404 nvme0n1 00:34:57.404 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.405 20:36:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjkwNzlmN2JlODZhZDA0OTg5NjE4NzQ2NjczNTRmNzhmNmVlMmM3NTVjZmI2YWQ3XOXsAw==: 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjkwNzlmN2JlODZhZDA0OTg5NjE4NzQ2NjczNTRmNzhmNmVlMmM3NTVjZmI2YWQ3XOXsAw==: 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: ]] 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.405 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.406 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.406 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.406 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.406 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.406 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.406 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.406 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:57.406 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.406 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:57.406 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:57.406 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:57.406 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:57.406 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.406 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:57.976 nvme0n1 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZGIxN2FlZDMwNzkzODAyMjRjOTM3MWFiZWQ0YjkxNGJkNGY0ZjM1NjVmOGYyNGFmNzEzNGNhN2M5NWE4NzNlNvth8b0=: 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGIxN2FlZDMwNzkzODAyMjRjOTM3MWFiZWQ0YjkxNGJkNGY0ZjM1NjVmOGYyNGFmNzEzNGNhN2M5NWE4NzNlNvth8b0=: 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.976 
20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.976 20:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.545 nvme0n1 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyZTQ1NzdhMjdjMTgzOGYyZjYwMTU5YWM0NmYzOWZ2a1Di: 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzQyZTQ1NzdhMjdjMTgzOGYyZjYwMTU5YWM0NmYzOWZ2a1Di: 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: ]] 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTczOTY3NjA1ZWNjMzljZDU0MDU2MzU3Nzc3YzY1YmNhM2UxNmVjNTIzMDFhYjk1NDhjOGM4ZDc3MGU4MzcwMWAYg2I=: 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.545 20:36:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.545 20:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.486 nvme0n1 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.486 20:36:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: ]] 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.486 20:36:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.486 20:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.425 nvme0n1 00:35:00.425 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.425 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.425 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.425 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.425 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.425 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.425 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.425 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.426 20:36:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw: 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw: 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: ]] 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:00.426 20:36:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.426 20:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.360 nvme0n1 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.360 20:36:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjkwNzlmN2JlODZhZDA0OTg5NjE4NzQ2NjczNTRmNzhmNmVlMmM3NTVjZmI2YWQ3XOXsAw==: 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjkwNzlmN2JlODZhZDA0OTg5NjE4NzQ2NjczNTRmNzhmNmVlMmM3NTVjZmI2YWQ3XOXsAw==: 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: ]] 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2UwM2YxY2NkOGRiYjU1ZThmZGVhNTA3M2NjMjI1NDBxmt3y: 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.360 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:01.929 nvme0n1 00:35:01.929 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.929 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.929 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.929 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.929 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.929 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.187 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:02.187 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:02.187 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.187 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.187 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.187 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:02.187 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:35:02.187 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:02.187 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:02.187 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:02.187 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:02.187 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZGIxN2FlZDMwNzkzODAyMjRjOTM3MWFiZWQ0YjkxNGJkNGY0ZjM1NjVmOGYyNGFmNzEzNGNhN2M5NWE4NzNlNvth8b0=: 00:35:02.187 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:02.187 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:02.187 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:02.187 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGIxN2FlZDMwNzkzODAyMjRjOTM3MWFiZWQ0YjkxNGJkNGY0ZjM1NjVmOGYyNGFmNzEzNGNhN2M5NWE4NzNlNvth8b0=: 00:35:02.187 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:02.187 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:35:02.188 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:02.188 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:02.188 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:02.188 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:02.188 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:02.188 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:02.188 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.188 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.188 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.188 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:02.188 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:02.188 
20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:02.188 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:02.188 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:02.188 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:02.188 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:02.188 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:02.188 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:02.188 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:02.188 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:02.188 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:02.188 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.188 20:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.124 nvme0n1 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: ]] 00:35:03.124 
20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:35:03.124 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.125 request: 00:35:03.125 { 00:35:03.125 "name": "nvme0", 00:35:03.125 "trtype": "tcp", 00:35:03.125 "traddr": "10.0.0.1", 00:35:03.125 "adrfam": "ipv4", 00:35:03.125 "trsvcid": "4420", 00:35:03.125 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:03.125 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:03.125 "prchk_reftag": false, 00:35:03.125 "prchk_guard": false, 00:35:03.125 "hdgst": false, 00:35:03.125 "ddgst": false, 00:35:03.125 "allow_unrecognized_csi": false, 00:35:03.125 "method": "bdev_nvme_attach_controller", 00:35:03.125 "req_id": 1 00:35:03.125 } 00:35:03.125 Got JSON-RPC error response 00:35:03.125 response: 00:35:03.125 { 00:35:03.125 "code": -5, 00:35:03.125 "message": "Input/output 
error" 00:35:03.125 } 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.125 20:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.125 request: 00:35:03.125 { 00:35:03.125 "name": "nvme0", 00:35:03.125 "trtype": "tcp", 00:35:03.125 "traddr": "10.0.0.1", 
00:35:03.125 "adrfam": "ipv4", 00:35:03.125 "trsvcid": "4420", 00:35:03.125 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:03.125 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:03.125 "prchk_reftag": false, 00:35:03.125 "prchk_guard": false, 00:35:03.125 "hdgst": false, 00:35:03.125 "ddgst": false, 00:35:03.125 "dhchap_key": "key2", 00:35:03.125 "allow_unrecognized_csi": false, 00:35:03.125 "method": "bdev_nvme_attach_controller", 00:35:03.125 "req_id": 1 00:35:03.125 } 00:35:03.125 Got JSON-RPC error response 00:35:03.125 response: 00:35:03.125 { 00:35:03.125 "code": -5, 00:35:03.125 "message": "Input/output error" 00:35:03.125 } 00:35:03.125 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:03.125 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:03.125 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:03.125 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:03.125 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:03.125 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.125 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.125 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:35:03.125 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.125 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.125 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:35:03.125 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:35:03.125 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:03.125 20:36:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:03.125 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:03.125 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.125 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.125 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:03.125 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:03.125 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:03.125 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:03.125 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:03.125 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:03.125 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:03.125 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:03.125 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:03.125 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:03.125 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:03.125 20:36:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:03.125 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:03.125 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.125 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.384 request: 00:35:03.384 { 00:35:03.384 "name": "nvme0", 00:35:03.384 "trtype": "tcp", 00:35:03.384 "traddr": "10.0.0.1", 00:35:03.384 "adrfam": "ipv4", 00:35:03.384 "trsvcid": "4420", 00:35:03.384 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:03.384 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:03.384 "prchk_reftag": false, 00:35:03.384 "prchk_guard": false, 00:35:03.384 "hdgst": false, 00:35:03.384 "ddgst": false, 00:35:03.384 "dhchap_key": "key1", 00:35:03.384 "dhchap_ctrlr_key": "ckey2", 00:35:03.384 "allow_unrecognized_csi": false, 00:35:03.384 "method": "bdev_nvme_attach_controller", 00:35:03.384 "req_id": 1 00:35:03.384 } 00:35:03.384 Got JSON-RPC error response 00:35:03.384 response: 00:35:03.384 { 00:35:03.384 "code": -5, 00:35:03.384 "message": "Input/output error" 00:35:03.384 } 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.384 nvme0n1 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:03.384 20:36:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw: 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw: 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: ]] 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.384 20:36:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:35:03.384 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.643 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.643 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:03.643 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:03.644 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:03.644 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:03.644 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:03.644 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:03.644 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:03.644 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:03.644 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.644 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.644 request: 00:35:03.644 { 00:35:03.644 "name": "nvme0", 00:35:03.644 "dhchap_key": "key1", 00:35:03.644 "dhchap_ctrlr_key": "ckey2", 00:35:03.644 "method": "bdev_nvme_set_keys", 00:35:03.644 "req_id": 1 00:35:03.644 } 00:35:03.644 Got JSON-RPC error response 00:35:03.644 response: 00:35:03.644 { 00:35:03.644 "code": -13, 00:35:03.644 "message": "Permission denied" 00:35:03.644 } 00:35:03.644 
20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:03.644 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:03.644 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:03.644 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:03.644 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:03.644 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.644 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:03.644 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.644 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.644 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.644 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:35:03.644 20:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:35:04.584 20:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:04.584 20:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:04.585 20:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.585 20:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.585 20:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.585 20:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:35:04.585 20:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:35:05.968 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:05.968 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:05.968 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.968 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.968 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.968 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:35:05.968 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:05.968 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:05.968 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:05.968 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:05.968 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:05.968 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:35:05.968 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:35:05.968 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:05.968 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY4MWFhMGRiMGQxNDAyNGRkZjljNmNiYjZjODMxZGIwOTE0NWQ0NmFmMzdhN2U1aaT4Pg==: 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: ]] 00:35:05.969 20:36:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExNzNmMTE0OGEwNGQxMWVjOWVmNmEzMjYyMWVkMWFlZDdlMDQ4ZGI4Njk4ODJkvX14zA==: 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.969 nvme0n1 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.969 20:36:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw: 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTEwMzQwZTc5NDZiMTVkZDA0MmIzNDM4MWFiYmQxNma4QkMw: 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: ]] 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhYTc0OTFhMjlmNmQ5Y2ExZDgwZmU0YTlmNWQ5MTVB5J8O: 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:05.969 
20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.969 request: 00:35:05.969 { 00:35:05.969 "name": "nvme0", 00:35:05.969 "dhchap_key": "key2", 00:35:05.969 "dhchap_ctrlr_key": "ckey1", 00:35:05.969 "method": "bdev_nvme_set_keys", 00:35:05.969 "req_id": 1 00:35:05.969 } 00:35:05.969 Got JSON-RPC error response 00:35:05.969 response: 00:35:05.969 { 00:35:05.969 "code": -13, 00:35:05.969 "message": "Permission denied" 00:35:05.969 } 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.969 20:36:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:35:05.969 20:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:35:06.906 20:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.906 20:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:06.906 20:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.906 20:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.906 20:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.906 20:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:35:06.906 20:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:35:06.906 20:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:35:06.906 20:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:35:06.906 20:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:06.906 20:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:35:06.906 20:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:06.906 20:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:35:06.906 20:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:06.906 20:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:06.906 rmmod nvme_tcp 00:35:07.165 rmmod nvme_fabrics 00:35:07.165 20:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:35:07.165 20:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:35:07.165 20:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:35:07.165 20:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 384095 ']' 00:35:07.165 20:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 384095 00:35:07.165 20:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 384095 ']' 00:35:07.165 20:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 384095 00:35:07.165 20:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:35:07.165 20:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:07.165 20:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 384095 00:35:07.165 20:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:07.165 20:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:07.165 20:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 384095' 00:35:07.165 killing process with pid 384095 00:35:07.165 20:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 384095 00:35:07.165 20:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 384095 00:35:07.165 20:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:07.165 20:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:07.165 20:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:07.165 20:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:35:07.165 20:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:35:07.165 20:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:07.165 20:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:35:07.165 20:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:07.165 20:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:07.165 20:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:07.165 20:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:07.165 20:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:09.697 20:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:09.697 20:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:09.697 20:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:09.697 20:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:35:09.697 20:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:35:09.697 20:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:35:09.697 20:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:09.697 20:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:09.697 20:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:09.697 20:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:09.697 20:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:09.697 20:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:09.697 20:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:10.635 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:10.635 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:10.635 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:10.635 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:10.635 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:10.635 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:10.635 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:10.635 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:10.635 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:10.635 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:10.635 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:10.635 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:10.635 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:10.635 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:10.635 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:10.635 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:11.573 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:11.831 20:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.tje /tmp/spdk.key-null.oko /tmp/spdk.key-sha256.20t /tmp/spdk.key-sha384.HAj /tmp/spdk.key-sha512.C1y 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:35:11.831 20:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:13.207 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:13.207 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:13.207 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:13.207 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:13.207 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:13.207 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:13.207 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:13.207 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:13.207 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:13.207 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:13.207 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:13.207 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:13.207 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:13.207 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:13.207 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:13.207 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:13.207 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:13.207 00:35:13.207 real 0m53.860s 00:35:13.207 user 0m51.382s 00:35:13.207 sys 0m6.220s 00:35:13.207 20:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:13.207 20:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.207 ************************************ 00:35:13.207 END TEST nvmf_auth_host 00:35:13.207 ************************************ 00:35:13.207 20:36:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:35:13.207 20:36:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:13.207 20:36:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:13.207 20:36:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:13.207 20:36:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.207 ************************************ 00:35:13.207 START TEST nvmf_digest 00:35:13.207 ************************************ 00:35:13.207 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:13.207 * Looking for test storage... 00:35:13.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:13.207 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:13.207 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:35:13.207 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:13.207 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:13.207 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:13.207 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:13.207 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:13.207 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:35:13.207 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:35:13.207 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:35:13.207 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:35:13.207 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:35:13.207 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:35:13.207 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:35:13.207 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:13.207 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:35:13.207 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:35:13.207 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:13.207 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:13.208 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:35:13.208 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:35:13.208 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:13.208 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:35:13.208 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:35:13.208 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:13.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.466 --rc genhtml_branch_coverage=1 00:35:13.466 --rc genhtml_function_coverage=1 00:35:13.466 --rc genhtml_legend=1 00:35:13.466 --rc geninfo_all_blocks=1 00:35:13.466 --rc geninfo_unexecuted_blocks=1 00:35:13.466 00:35:13.466 ' 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:13.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.466 --rc genhtml_branch_coverage=1 00:35:13.466 --rc genhtml_function_coverage=1 00:35:13.466 --rc genhtml_legend=1 00:35:13.466 --rc geninfo_all_blocks=1 00:35:13.466 --rc geninfo_unexecuted_blocks=1 00:35:13.466 00:35:13.466 ' 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:13.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.466 --rc genhtml_branch_coverage=1 00:35:13.466 --rc genhtml_function_coverage=1 00:35:13.466 --rc genhtml_legend=1 00:35:13.466 --rc geninfo_all_blocks=1 00:35:13.466 --rc geninfo_unexecuted_blocks=1 00:35:13.466 00:35:13.466 ' 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:13.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.466 --rc genhtml_branch_coverage=1 00:35:13.466 --rc genhtml_function_coverage=1 00:35:13.466 --rc genhtml_legend=1 00:35:13.466 --rc geninfo_all_blocks=1 00:35:13.466 --rc geninfo_unexecuted_blocks=1 00:35:13.466 00:35:13.466 ' 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:35:13.466 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.467 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:35:13.467 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:13.467 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:13.467 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:13.467 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:13.467 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
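The `cmp_versions`/`lt` xtrace earlier in this section (scripts/common.sh@333-368, used here to gate the lcov >= 2 option handling) splits each version string on `.`, `-`, and `:` and compares it component by component. A minimal re-creation of that logic, assumed from the trace rather than copied from the script:

```shell
#!/usr/bin/env bash
# Re-creation (assumed from the xtrace above) of scripts/common.sh's
# component-wise version comparison; missing components compare as 0.
cmp_versions() {
    local op=$2 v lt=0 gt=0
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$3"
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        if   (( a > b )); then gt=1; break
        elif (( a < b )); then lt=1; break
        fi
    done
    case "$op" in
        '<')  (( lt == 1 )) ;;
        '>')  (( gt == 1 )) ;;
        '==') (( lt == 0 && gt == 0 )) ;;
    esac
}
lt() { cmp_versions "$1" '<' "$2"; }
```

This is why `lt 1.15 2` succeeds in the trace: the first components already decide the comparison (1 < 2), so lcov 1.15 takes the pre-2.0 option set.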
00:35:13.467 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:13.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:13.467 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:13.467 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:13.467 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:13.467 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:13.467 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:35:13.467 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:35:13.467 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:35:13.467 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:35:13.467 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:13.467 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:13.467 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:13.467 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:13.467 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:13.467 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:13.467 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:13.467 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:13.467 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:13.467 20:36:25 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:13.467 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:35:13.467 20:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:15.373 20:36:27 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:15.373 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:15.373 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:15.373 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:15.373 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:15.373 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:15.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:15.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:35:15.373 00:35:15.374 --- 10.0.0.2 ping statistics --- 00:35:15.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:15.374 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:35:15.374 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:15.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:15.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:35:15.374 00:35:15.374 --- 10.0.0.1 ping statistics --- 00:35:15.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:15.374 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:35:15.374 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:15.374 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:35:15.374 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:15.374 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:15.374 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:15.374 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:15.374 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:15.374 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:15.374 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:15.374 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:15.374 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:35:15.374 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:35:15.374 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:15.374 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:15.374 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:15.374 ************************************ 00:35:15.374 START TEST nvmf_digest_clean 00:35:15.374 ************************************ 00:35:15.374 
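The `nvmf_tcp_init` sequence just logged (move `cvl_0_0` into the `cvl_0_0_ns_spdk` namespace, address both ends, open TCP port 4420 via the `ipts` wrapper, then ping in both directions) can be sketched as below. Interface names, addresses, and the iptables comment tag come from the log; the `DRY_RUN` switch is an illustrative addition so the commands can be inspected without root:

```shell
#!/usr/bin/env bash
# Sketch of the target-side network plumbing from the log.
# DRY_RUN=1 prints the commands instead of executing them (needs root live).
run() { ${DRY_RUN:+echo} "$@"; }

nvmf_tcp_init_sketch() {
    local ns=$1 tgt_if=$2 ini_if=$3    # e.g. cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"      # target port lives inside the ns
    run ip addr add 10.0.0.1/24 dev "$ini_if"  # initiator side stays in the host
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up
    # open the NVMe/TCP port, tagged with a comment so cleanup can find the rule
    run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    # verify reachability in both directions before any NVMe traffic
    run ping -c 1 10.0.0.2
    run ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

Keeping the target port in its own namespace is what lets a single host exercise real NIC-to-NIC TCP (10.0.0.1 to 10.0.0.2) between initiator and target.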
20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:35:15.374 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:35:15.374 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:35:15.374 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:35:15.374 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:35:15.374 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:35:15.374 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:15.374 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:15.374 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:15.633 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=394716 00:35:15.633 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:15.633 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 394716 00:35:15.633 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 394716 ']' 00:35:15.633 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:15.633 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:15.633 20:36:27 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:15.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:15.633 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:15.633 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:15.633 [2024-11-18 20:36:27.433008] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:35:15.633 [2024-11-18 20:36:27.433087] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:15.633 [2024-11-18 20:36:27.505664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:15.633 [2024-11-18 20:36:27.553378] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:15.633 [2024-11-18 20:36:27.553448] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:15.633 [2024-11-18 20:36:27.553462] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:15.633 [2024-11-18 20:36:27.553473] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:15.633 [2024-11-18 20:36:27.553482] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:15.633 [2024-11-18 20:36:27.554086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:15.891 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:15.891 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:15.891 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:15.891 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:15.891 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:15.891 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:15.891 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:35:15.891 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:35:15.891 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:35:15.891 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.892 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:15.892 null0 00:35:15.892 [2024-11-18 20:36:27.783802] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:15.892 [2024-11-18 20:36:27.808039] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:15.892 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.892 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:35:15.892 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:15.892 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:15.892 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:15.892 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:15.892 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:15.892 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:15.892 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=394739 00:35:15.892 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:15.892 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 394739 /var/tmp/bperf.sock 00:35:15.892 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 394739 ']' 00:35:15.892 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:15.892 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:15.892 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:15.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
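`waitforlisten 394739 /var/tmp/bperf.sock` blocks until the freshly launched bdevperf process is accepting RPCs on its UNIX socket. A hedged sketch of what it waits on (the real helper lives in autotest_common.sh; its internals, retry count, and poll interval here are assumptions):

```shell
#!/usr/bin/env bash
# Assumed sketch of waitforlisten: the process must stay alive and its
# RPC UNIX-domain socket must appear before the test may issue RPCs.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # process died before listening
        [ -S "$rpc_addr" ] && return 0           # socket exists: ready for RPCs
        sleep 0.1
    done
    return 1                                     # timed out waiting for the socket
}
```

Checking `kill -0 "$pid"` on every iteration turns a crashed bdevperf into an immediate failure instead of a full timeout.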
00:35:15.892 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:15.892 20:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:15.892 [2024-11-18 20:36:27.853806] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:35:15.892 [2024-11-18 20:36:27.853868] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid394739 ] 00:35:16.150 [2024-11-18 20:36:27.923007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:16.150 [2024-11-18 20:36:27.970195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:16.150 20:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:16.150 20:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:16.150 20:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:16.150 20:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:16.150 20:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:16.716 20:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:16.716 20:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:16.974 nvme0n1 00:35:16.974 20:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:16.974 20:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:17.234 Running I/O for 2 seconds... 00:35:19.108 18488.00 IOPS, 72.22 MiB/s [2024-11-18T19:36:31.116Z] 18417.00 IOPS, 71.94 MiB/s 00:35:19.108 Latency(us) 00:35:19.108 [2024-11-18T19:36:31.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:19.108 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:19.108 nvme0n1 : 2.00 18442.64 72.04 0.00 0.00 6932.63 3082.62 20680.25 00:35:19.108 [2024-11-18T19:36:31.116Z] =================================================================================================================== 00:35:19.108 [2024-11-18T19:36:31.116Z] Total : 18442.64 72.04 0.00 0.00 6932.63 3082.62 20680.25 00:35:19.108 { 00:35:19.108 "results": [ 00:35:19.108 { 00:35:19.108 "job": "nvme0n1", 00:35:19.108 "core_mask": "0x2", 00:35:19.108 "workload": "randread", 00:35:19.108 "status": "finished", 00:35:19.108 "queue_depth": 128, 00:35:19.108 "io_size": 4096, 00:35:19.108 "runtime": 2.00416, 00:35:19.108 "iops": 18442.63931023471, 00:35:19.108 "mibps": 72.04155980560434, 00:35:19.108 "io_failed": 0, 00:35:19.108 "io_timeout": 0, 00:35:19.108 "avg_latency_us": 6932.634342778468, 00:35:19.108 "min_latency_us": 3082.6192592592593, 00:35:19.108 "max_latency_us": 20680.248888888887 00:35:19.108 } 00:35:19.108 ], 00:35:19.108 "core_count": 1 00:35:19.108 } 00:35:19.108 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:19.108 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:35:19.108 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:19.108 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:19.108 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:19.108 | select(.opcode=="crc32c") 00:35:19.108 | "\(.module_name) \(.executed)"' 00:35:19.368 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:19.368 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:19.368 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:19.368 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:19.368 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 394739 00:35:19.368 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 394739 ']' 00:35:19.368 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 394739 00:35:19.368 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:19.368 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:19.368 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 394739 00:35:19.368 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:19.368 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:19.368 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 394739' 00:35:19.368 killing process with pid 394739 00:35:19.368 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 394739 00:35:19.368 Received shutdown signal, test time was about 2.000000 seconds 00:35:19.368 00:35:19.368 Latency(us) 00:35:19.368 [2024-11-18T19:36:31.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:19.368 [2024-11-18T19:36:31.376Z] =================================================================================================================== 00:35:19.368 [2024-11-18T19:36:31.376Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:19.368 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 394739 00:35:19.627 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:35:19.627 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:19.627 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:19.627 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:19.627 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:19.627 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:19.627 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:19.627 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=395152 00:35:19.627 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:19.627 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 395152 /var/tmp/bperf.sock 00:35:19.627 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 395152 ']' 00:35:19.627 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:19.627 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:19.627 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:19.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:19.627 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:19.627 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:19.627 [2024-11-18 20:36:31.582825] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:35:19.627 [2024-11-18 20:36:31.582922] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid395152 ] 00:35:19.627 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:19.627 Zero copy mechanism will not be used. 
00:35:19.886 [2024-11-18 20:36:31.648027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:19.886 [2024-11-18 20:36:31.692478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:19.886 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:19.886 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:19.886 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:19.886 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:19.886 20:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:20.453 20:36:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:20.453 20:36:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:20.711 nvme0n1 00:35:20.711 20:36:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:20.711 20:36:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:20.970 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:20.970 Zero copy mechanism will not be used. 00:35:20.970 Running I/O for 2 seconds... 
00:35:22.848 6058.00 IOPS, 757.25 MiB/s [2024-11-18T19:36:34.856Z] 5957.00 IOPS, 744.62 MiB/s 00:35:22.848 Latency(us) 00:35:22.848 [2024-11-18T19:36:34.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:22.848 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:22.848 nvme0n1 : 2.00 5954.86 744.36 0.00 0.00 2682.81 667.50 5873.97 00:35:22.848 [2024-11-18T19:36:34.856Z] =================================================================================================================== 00:35:22.848 [2024-11-18T19:36:34.856Z] Total : 5954.86 744.36 0.00 0.00 2682.81 667.50 5873.97 00:35:22.848 { 00:35:22.848 "results": [ 00:35:22.848 { 00:35:22.848 "job": "nvme0n1", 00:35:22.848 "core_mask": "0x2", 00:35:22.848 "workload": "randread", 00:35:22.848 "status": "finished", 00:35:22.848 "queue_depth": 16, 00:35:22.848 "io_size": 131072, 00:35:22.848 "runtime": 2.003405, 00:35:22.848 "iops": 5954.861847704284, 00:35:22.848 "mibps": 744.3577309630355, 00:35:22.848 "io_failed": 0, 00:35:22.848 "io_timeout": 0, 00:35:22.848 "avg_latency_us": 2682.8053561826705, 00:35:22.848 "min_latency_us": 667.4962962962964, 00:35:22.848 "max_latency_us": 5873.967407407407 00:35:22.848 } 00:35:22.848 ], 00:35:22.848 "core_count": 1 00:35:22.848 } 00:35:22.848 20:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:22.848 20:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:22.848 20:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:22.848 20:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:22.848 20:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:35:22.848 | select(.opcode=="crc32c") 00:35:22.848 | "\(.module_name) \(.executed)"' 00:35:23.106 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:23.106 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:23.106 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:23.106 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:23.107 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 395152 00:35:23.107 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 395152 ']' 00:35:23.107 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 395152 00:35:23.107 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:23.107 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:23.107 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 395152 00:35:23.365 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:23.365 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:23.365 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 395152' 00:35:23.365 killing process with pid 395152 00:35:23.365 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 395152 00:35:23.365 Received shutdown signal, test time was about 2.000000 seconds 00:35:23.365 00:35:23.365 
Latency(us) 00:35:23.365 [2024-11-18T19:36:35.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:23.365 [2024-11-18T19:36:35.373Z] =================================================================================================================== 00:35:23.365 [2024-11-18T19:36:35.373Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:23.365 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 395152 00:35:23.365 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:35:23.365 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:23.365 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:23.365 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:23.365 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:23.365 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:23.365 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:23.365 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=395677 00:35:23.365 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:23.365 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 395677 /var/tmp/bperf.sock 00:35:23.365 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 395677 ']' 00:35:23.365 20:36:35 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:23.365 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:23.365 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:23.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:23.365 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:23.365 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:23.365 [2024-11-18 20:36:35.344666] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:35:23.365 [2024-11-18 20:36:35.344774] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid395677 ] 00:35:23.624 [2024-11-18 20:36:35.411173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:23.624 [2024-11-18 20:36:35.460583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:23.624 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:23.624 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:23.624 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:23.624 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:23.624 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:24.191 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:24.191 20:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:24.450 nvme0n1 00:35:24.450 20:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:24.450 20:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:24.450 Running I/O for 2 seconds... 
00:35:26.766 17994.00 IOPS, 70.29 MiB/s [2024-11-18T19:36:38.774Z] 18029.00 IOPS, 70.43 MiB/s 00:35:26.766 Latency(us) 00:35:26.766 [2024-11-18T19:36:38.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:26.766 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:26.766 nvme0n1 : 2.01 18029.96 70.43 0.00 0.00 7083.11 5582.70 12524.66 00:35:26.766 [2024-11-18T19:36:38.774Z] =================================================================================================================== 00:35:26.766 [2024-11-18T19:36:38.774Z] Total : 18029.96 70.43 0.00 0.00 7083.11 5582.70 12524.66 00:35:26.766 { 00:35:26.766 "results": [ 00:35:26.766 { 00:35:26.766 "job": "nvme0n1", 00:35:26.766 "core_mask": "0x2", 00:35:26.766 "workload": "randwrite", 00:35:26.766 "status": "finished", 00:35:26.766 "queue_depth": 128, 00:35:26.766 "io_size": 4096, 00:35:26.767 "runtime": 2.006549, 00:35:26.767 "iops": 18029.96089305569, 00:35:26.767 "mibps": 70.42953473849879, 00:35:26.767 "io_failed": 0, 00:35:26.767 "io_timeout": 0, 00:35:26.767 "avg_latency_us": 7083.113269656411, 00:35:26.767 "min_latency_us": 5582.696296296296, 00:35:26.767 "max_latency_us": 12524.657777777778 00:35:26.767 } 00:35:26.767 ], 00:35:26.767 "core_count": 1 00:35:26.767 } 00:35:26.767 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:26.767 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:26.767 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:26.767 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:26.767 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:35:26.767 | select(.opcode=="crc32c") 00:35:26.767 | "\(.module_name) \(.executed)"' 00:35:26.767 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:26.767 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:26.767 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:26.767 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:26.767 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 395677 00:35:26.767 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 395677 ']' 00:35:26.767 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 395677 00:35:26.767 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:26.767 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:26.767 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 395677 00:35:26.767 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:26.767 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:26.767 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 395677' 00:35:26.767 killing process with pid 395677 00:35:26.767 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 395677 00:35:26.767 Received shutdown signal, test time was about 2.000000 seconds 00:35:26.767 00:35:26.767 
Latency(us) 00:35:26.767 [2024-11-18T19:36:38.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:26.767 [2024-11-18T19:36:38.775Z] =================================================================================================================== 00:35:26.767 [2024-11-18T19:36:38.775Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:26.767 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 395677 00:35:27.026 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:27.026 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:27.026 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:27.026 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:27.026 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:27.026 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:27.026 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:27.026 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=396081 00:35:27.026 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 396081 /var/tmp/bperf.sock 00:35:27.026 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:27.026 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 396081 ']' 00:35:27.026 20:36:38 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:27.026 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:27.026 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:27.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:27.026 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:27.026 20:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:27.026 [2024-11-18 20:36:38.995308] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:35:27.026 [2024-11-18 20:36:38.995408] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid396081 ] 00:35:27.026 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:27.026 Zero copy mechanism will not be used. 
00:35:27.284 [2024-11-18 20:36:39.061703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:27.284 [2024-11-18 20:36:39.107184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:27.284 20:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:27.284 20:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:27.284 20:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:27.284 20:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:27.284 20:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:27.851 20:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:27.851 20:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:28.109 nvme0n1 00:35:28.109 20:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:28.109 20:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:28.369 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:28.369 Zero copy mechanism will not be used. 00:35:28.369 Running I/O for 2 seconds... 
00:35:30.245 5611.00 IOPS, 701.38 MiB/s [2024-11-18T19:36:42.253Z] 5837.00 IOPS, 729.62 MiB/s 00:35:30.245 Latency(us) 00:35:30.245 [2024-11-18T19:36:42.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:30.245 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:30.245 nvme0n1 : 2.00 5833.27 729.16 0.00 0.00 2735.56 2002.49 8009.96 00:35:30.245 [2024-11-18T19:36:42.253Z] =================================================================================================================== 00:35:30.245 [2024-11-18T19:36:42.253Z] Total : 5833.27 729.16 0.00 0.00 2735.56 2002.49 8009.96 00:35:30.245 { 00:35:30.245 "results": [ 00:35:30.245 { 00:35:30.245 "job": "nvme0n1", 00:35:30.245 "core_mask": "0x2", 00:35:30.245 "workload": "randwrite", 00:35:30.245 "status": "finished", 00:35:30.245 "queue_depth": 16, 00:35:30.245 "io_size": 131072, 00:35:30.245 "runtime": 2.004708, 00:35:30.245 "iops": 5833.268485983994, 00:35:30.245 "mibps": 729.1585607479992, 00:35:30.245 "io_failed": 0, 00:35:30.245 "io_timeout": 0, 00:35:30.245 "avg_latency_us": 2735.5615564803734, 00:35:30.245 "min_latency_us": 2002.4888888888888, 00:35:30.245 "max_latency_us": 8009.955555555555 00:35:30.245 } 00:35:30.245 ], 00:35:30.245 "core_count": 1 00:35:30.245 } 00:35:30.245 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:30.245 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:30.245 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:30.245 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:30.245 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:35:30.245 | select(.opcode=="crc32c") 00:35:30.245 | "\(.module_name) \(.executed)"' 00:35:30.504 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:30.504 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:30.504 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:30.504 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:30.504 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 396081 00:35:30.504 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 396081 ']' 00:35:30.504 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 396081 00:35:30.504 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:30.504 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:30.504 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 396081 00:35:30.764 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:30.764 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:30.764 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 396081' 00:35:30.764 killing process with pid 396081 00:35:30.764 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 396081 00:35:30.764 Received shutdown signal, test time was about 2.000000 seconds 00:35:30.764 00:35:30.764 
Latency(us) 00:35:30.764 [2024-11-18T19:36:42.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:30.764 [2024-11-18T19:36:42.772Z] =================================================================================================================== 00:35:30.764 [2024-11-18T19:36:42.772Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:30.764 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 396081 00:35:30.764 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 394716 00:35:30.764 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 394716 ']' 00:35:30.764 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 394716 00:35:30.764 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:30.764 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:30.764 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 394716 00:35:30.764 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:30.764 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:30.764 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 394716' 00:35:30.764 killing process with pid 394716 00:35:30.764 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 394716 00:35:30.764 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 394716 00:35:31.024 00:35:31.024 real 0m15.576s 00:35:31.024 user 
0m31.397s 00:35:31.024 sys 0m4.208s 00:35:31.024 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:31.024 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:31.024 ************************************ 00:35:31.024 END TEST nvmf_digest_clean 00:35:31.024 ************************************ 00:35:31.024 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:31.024 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:31.024 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:31.024 20:36:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:31.024 ************************************ 00:35:31.024 START TEST nvmf_digest_error 00:35:31.024 ************************************ 00:35:31.024 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:35:31.024 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:31.024 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:31.024 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:31.024 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:31.024 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=396531 00:35:31.024 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:31.024 20:36:43 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 396531 00:35:31.024 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 396531 ']' 00:35:31.024 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:31.024 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:31.024 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:31.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:31.024 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:31.024 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:31.282 [2024-11-18 20:36:43.064769] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:35:31.282 [2024-11-18 20:36:43.064844] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:31.282 [2024-11-18 20:36:43.135991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:31.282 [2024-11-18 20:36:43.178664] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:31.282 [2024-11-18 20:36:43.178724] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:31.282 [2024-11-18 20:36:43.178737] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:31.282 [2024-11-18 20:36:43.178748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:31.282 [2024-11-18 20:36:43.178758] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:31.282 [2024-11-18 20:36:43.179286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:31.541 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:31.541 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:31.541 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:31.541 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:31.541 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:31.541 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:31.541 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:31.541 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.541 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:31.541 [2024-11-18 20:36:43.352092] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:31.541 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.541 20:36:43 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:31.541 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:31.541 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.541 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:31.541 null0 00:35:31.541 [2024-11-18 20:36:43.458052] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:31.541 [2024-11-18 20:36:43.482264] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:31.541 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.541 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:31.541 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:31.541 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:31.541 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:31.541 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:31.541 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=396663 00:35:31.541 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:31.541 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 396663 /var/tmp/bperf.sock 00:35:31.541 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 396663 ']' 
00:35:31.541 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:31.541 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:31.541 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:31.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:31.541 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:31.541 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:31.541 [2024-11-18 20:36:43.529129] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:35:31.541 [2024-11-18 20:36:43.529209] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid396663 ] 00:35:31.799 [2024-11-18 20:36:43.594294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:31.799 [2024-11-18 20:36:43.639110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:31.799 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:31.799 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:31.799 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:31.799 20:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:32.058 20:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:32.058 20:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.058 20:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:32.058 20:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.058 20:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:32.058 20:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:32.625 nvme0n1 00:35:32.626 20:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:32.626 20:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.626 20:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:32.626 20:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.626 20:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:32.626 20:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:32.886 Running I/O for 2 seconds... 00:35:32.886 [2024-11-18 20:36:44.659603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:32.886 [2024-11-18 20:36:44.659674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.886 [2024-11-18 20:36:44.659699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.886 [2024-11-18 20:36:44.674250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:32.886 [2024-11-18 20:36:44.674283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.886 [2024-11-18 20:36:44.674302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.886 [2024-11-18 20:36:44.686287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:32.886 [2024-11-18 20:36:44.686319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.886 [2024-11-18 20:36:44.686336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.886 [2024-11-18 20:36:44.702988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:32.886 [2024-11-18 20:36:44.703019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25147 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.886 [2024-11-18 20:36:44.703036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.886 [2024-11-18 20:36:44.716523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:32.886 [2024-11-18 20:36:44.716577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.886 [2024-11-18 20:36:44.716598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.886 [2024-11-18 20:36:44.727616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:32.886 [2024-11-18 20:36:44.727667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.886 [2024-11-18 20:36:44.727686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.886 [2024-11-18 20:36:44.742378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:32.886 [2024-11-18 20:36:44.742411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.886 [2024-11-18 20:36:44.742455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.886 [2024-11-18 20:36:44.754011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:32.886 [2024-11-18 20:36:44.754042] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.886 [2024-11-18 20:36:44.754074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.886 [2024-11-18 20:36:44.767359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:32.886 [2024-11-18 20:36:44.767390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.886 [2024-11-18 20:36:44.767407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.886 [2024-11-18 20:36:44.780530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:32.886 [2024-11-18 20:36:44.780582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.886 [2024-11-18 20:36:44.780610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.886 [2024-11-18 20:36:44.795248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:32.886 [2024-11-18 20:36:44.795279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.886 [2024-11-18 20:36:44.795295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.886 [2024-11-18 20:36:44.806243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1eee3f0) 00:35:32.886 [2024-11-18 20:36:44.806273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.886 [2024-11-18 20:36:44.806290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.886 [2024-11-18 20:36:44.822584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:32.886 [2024-11-18 20:36:44.822614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.886 [2024-11-18 20:36:44.822655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.887 [2024-11-18 20:36:44.837496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:32.887 [2024-11-18 20:36:44.837527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.887 [2024-11-18 20:36:44.837544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.887 [2024-11-18 20:36:44.851269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:32.887 [2024-11-18 20:36:44.851299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.887 [2024-11-18 20:36:44.851315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.887 [2024-11-18 20:36:44.863925] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:32.887 [2024-11-18 20:36:44.863977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.887 [2024-11-18 20:36:44.864009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.887 [2024-11-18 20:36:44.876123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:32.887 [2024-11-18 20:36:44.876160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.887 [2024-11-18 20:36:44.876191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.887 [2024-11-18 20:36:44.890615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:32.887 [2024-11-18 20:36:44.890667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.887 [2024-11-18 20:36:44.890687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.146 [2024-11-18 20:36:44.903649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:33.146 [2024-11-18 20:36:44.903685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.146 [2024-11-18 20:36:44.903704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:33.146 [2024-11-18 20:36:44.920662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:33.146 [2024-11-18 20:36:44.920693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.146 [2024-11-18 20:36:44.920710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.146 [2024-11-18 20:36:44.933208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:33.146 [2024-11-18 20:36:44.933252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.147 [2024-11-18 20:36:44.933277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.147 [2024-11-18 20:36:44.944709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:33.147 [2024-11-18 20:36:44.944744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.147 [2024-11-18 20:36:44.944764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.147 [2024-11-18 20:36:44.957746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:33.147 [2024-11-18 20:36:44.957777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.147 [2024-11-18 20:36:44.957794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.147 [2024-11-18 20:36:44.971556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:33.147 [2024-11-18 20:36:44.971610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.147 [2024-11-18 20:36:44.971656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.147 [2024-11-18 20:36:44.983328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:33.147 [2024-11-18 20:36:44.983358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.147 [2024-11-18 20:36:44.983374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.147 [2024-11-18 20:36:44.996458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:33.147 [2024-11-18 20:36:44.996488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.147 [2024-11-18 20:36:44.996505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.147 [2024-11-18 20:36:45.009474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:33.147 [2024-11-18 20:36:45.009505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.147 [2024-11-18 
20:36:45.009522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.147 [2024-11-18 20:36:45.023403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:33.147 [2024-11-18 20:36:45.023433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.147 [2024-11-18 20:36:45.023450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.147 [2024-11-18 20:36:45.036165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:33.147 [2024-11-18 20:36:45.036194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.147 [2024-11-18 20:36:45.036211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.147 [2024-11-18 20:36:45.048988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:33.147 [2024-11-18 20:36:45.049032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.147 [2024-11-18 20:36:45.049048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.147 [2024-11-18 20:36:45.063795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:33.147 [2024-11-18 20:36:45.063827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7456 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.147 [2024-11-18 20:36:45.063845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.147 [2024-11-18 20:36:45.079265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:33.147 [2024-11-18 20:36:45.079295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.147 [2024-11-18 20:36:45.079312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.147 [2024-11-18 20:36:45.093272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:33.147 [2024-11-18 20:36:45.093314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.147 [2024-11-18 20:36:45.093351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.147 [2024-11-18 20:36:45.104381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:33.147 [2024-11-18 20:36:45.104411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.147 [2024-11-18 20:36:45.104427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.147 [2024-11-18 20:36:45.121224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:33.147 [2024-11-18 20:36:45.121268] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.147 [2024-11-18 20:36:45.121285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.147 [2024-11-18 20:36:45.135776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.147 [2024-11-18 20:36:45.135816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.147 [2024-11-18 20:36:45.135847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.147 [2024-11-18 20:36:45.149098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.147 [2024-11-18 20:36:45.149128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.147 [2024-11-18 20:36:45.149145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.407 [2024-11-18 20:36:45.160520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.407 [2024-11-18 20:36:45.160567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.407 [2024-11-18 20:36:45.160585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.407 [2024-11-18 20:36:45.175269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.407 [2024-11-18 20:36:45.175299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.407 [2024-11-18 20:36:45.175316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.407 [2024-11-18 20:36:45.191149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.407 [2024-11-18 20:36:45.191180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.407 [2024-11-18 20:36:45.191196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.407 [2024-11-18 20:36:45.208098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.407 [2024-11-18 20:36:45.208128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.407 [2024-11-18 20:36:45.208144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.407 [2024-11-18 20:36:45.221938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.407 [2024-11-18 20:36:45.221988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.407 [2024-11-18 20:36:45.222006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.407 [2024-11-18 20:36:45.237392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.407 [2024-11-18 20:36:45.237422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.407 [2024-11-18 20:36:45.237439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.407 [2024-11-18 20:36:45.249543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.407 [2024-11-18 20:36:45.249574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.407 [2024-11-18 20:36:45.249591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.407 [2024-11-18 20:36:45.263701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.407 [2024-11-18 20:36:45.263732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.407 [2024-11-18 20:36:45.263749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.407 [2024-11-18 20:36:45.275454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.407 [2024-11-18 20:36:45.275484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.407 [2024-11-18 20:36:45.275516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.407 [2024-11-18 20:36:45.290040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.407 [2024-11-18 20:36:45.290071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.407 [2024-11-18 20:36:45.290087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.407 [2024-11-18 20:36:45.306292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.407 [2024-11-18 20:36:45.306322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.407 [2024-11-18 20:36:45.306339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.407 [2024-11-18 20:36:45.320167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.407 [2024-11-18 20:36:45.320205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.407 [2024-11-18 20:36:45.320225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.407 [2024-11-18 20:36:45.331490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.407 [2024-11-18 20:36:45.331535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.407 [2024-11-18 20:36:45.331552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.407 [2024-11-18 20:36:45.347201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.407 [2024-11-18 20:36:45.347253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.407 [2024-11-18 20:36:45.347271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.407 [2024-11-18 20:36:45.362562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.407 [2024-11-18 20:36:45.362604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.407 [2024-11-18 20:36:45.362624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.407 [2024-11-18 20:36:45.379043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.407 [2024-11-18 20:36:45.379073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.407 [2024-11-18 20:36:45.379105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.407 [2024-11-18 20:36:45.394242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.407 [2024-11-18 20:36:45.394289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.408 [2024-11-18 20:36:45.394305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.408 [2024-11-18 20:36:45.410055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.408 [2024-11-18 20:36:45.410090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.408 [2024-11-18 20:36:45.410119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.666 [2024-11-18 20:36:45.421745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.666 [2024-11-18 20:36:45.421778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.666 [2024-11-18 20:36:45.421797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.666 [2024-11-18 20:36:45.436159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.666 [2024-11-18 20:36:45.436188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.666 [2024-11-18 20:36:45.436220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.666 [2024-11-18 20:36:45.450023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.666 [2024-11-18 20:36:45.450069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.666 [2024-11-18 20:36:45.450086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.666 [2024-11-18 20:36:45.464598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.666 [2024-11-18 20:36:45.464655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.666 [2024-11-18 20:36:45.464679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.666 [2024-11-18 20:36:45.476393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.666 [2024-11-18 20:36:45.476422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.666 [2024-11-18 20:36:45.476455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.666 [2024-11-18 20:36:45.490567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.666 [2024-11-18 20:36:45.490613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.666 [2024-11-18 20:36:45.490632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.666 [2024-11-18 20:36:45.506347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.666 [2024-11-18 20:36:45.506377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.666 [2024-11-18 20:36:45.506409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.666 [2024-11-18 20:36:45.518578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.666 [2024-11-18 20:36:45.518608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.666 [2024-11-18 20:36:45.518648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.666 [2024-11-18 20:36:45.532977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.666 [2024-11-18 20:36:45.533006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.666 [2024-11-18 20:36:45.533037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.666 [2024-11-18 20:36:45.548377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.666 [2024-11-18 20:36:45.548422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.666 [2024-11-18 20:36:45.548439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.666 [2024-11-18 20:36:45.562451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.666 [2024-11-18 20:36:45.562482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.666 [2024-11-18 20:36:45.562515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.666 [2024-11-18 20:36:45.578161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.666 [2024-11-18 20:36:45.578200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.666 [2024-11-18 20:36:45.578245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.666 [2024-11-18 20:36:45.590133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.666 [2024-11-18 20:36:45.590161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.666 [2024-11-18 20:36:45.590192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.666 [2024-11-18 20:36:45.604410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.666 [2024-11-18 20:36:45.604438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.666 [2024-11-18 20:36:45.604470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.666 [2024-11-18 20:36:45.617875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.667 [2024-11-18 20:36:45.617907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.667 [2024-11-18 20:36:45.617938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.667 [2024-11-18 20:36:45.628756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.667 [2024-11-18 20:36:45.628789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.667 [2024-11-18 20:36:45.628807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.667 18257.00 IOPS, 71.32 MiB/s [2024-11-18T19:36:45.675Z] [2024-11-18 20:36:45.642840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.667 [2024-11-18 20:36:45.642871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.667 [2024-11-18 20:36:45.642887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.667 [2024-11-18 20:36:45.655300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.667 [2024-11-18 20:36:45.655330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.667 [2024-11-18 20:36:45.655364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.667 [2024-11-18 20:36:45.668452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.667 [2024-11-18 20:36:45.668483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.667 [2024-11-18 20:36:45.668515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.925 [2024-11-18 20:36:45.679789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.925 [2024-11-18 20:36:45.679820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.925 [2024-11-18 20:36:45.679837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.925 [2024-11-18 20:36:45.695428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.925 [2024-11-18 20:36:45.695458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.925 [2024-11-18 20:36:45.695479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.925 [2024-11-18 20:36:45.711288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.925 [2024-11-18 20:36:45.711335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.925 [2024-11-18 20:36:45.711352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.925 [2024-11-18 20:36:45.726057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.925 [2024-11-18 20:36:45.726092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.925 [2024-11-18 20:36:45.726126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.925 [2024-11-18 20:36:45.736830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.925 [2024-11-18 20:36:45.736877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.925 [2024-11-18 20:36:45.736895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.925 [2024-11-18 20:36:45.752044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.925 [2024-11-18 20:36:45.752090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.925 [2024-11-18 20:36:45.752115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.925 [2024-11-18 20:36:45.762604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.925 [2024-11-18 20:36:45.762655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:25277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.925 [2024-11-18 20:36:45.762672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.925 [2024-11-18 20:36:45.777154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.925 [2024-11-18 20:36:45.777182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.925 [2024-11-18 20:36:45.777213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.925 [2024-11-18 20:36:45.792165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.925 [2024-11-18 20:36:45.792194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.925 [2024-11-18 20:36:45.792225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.925 [2024-11-18 20:36:45.809259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.925 [2024-11-18 20:36:45.809288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.925 [2024-11-18 20:36:45.809319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.925 [2024-11-18 20:36:45.822663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.925 [2024-11-18 20:36:45.822699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.925 [2024-11-18 20:36:45.822717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.925 [2024-11-18 20:36:45.834737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.925 [2024-11-18 20:36:45.834769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.925 [2024-11-18 20:36:45.834787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.925 [2024-11-18 20:36:45.848387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.925 [2024-11-18 20:36:45.848420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.925 [2024-11-18 20:36:45.848438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.925 [2024-11-18 20:36:45.860328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.925 [2024-11-18 20:36:45.860357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.925 [2024-11-18 20:36:45.860389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.925 [2024-11-18 20:36:45.873497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.925 [2024-11-18 20:36:45.873525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.925 [2024-11-18 20:36:45.873557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.925 [2024-11-18 20:36:45.888768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.925 [2024-11-18 20:36:45.888799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.925 [2024-11-18 20:36:45.888815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.925 [2024-11-18 20:36:45.905722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.925 [2024-11-18 20:36:45.905751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.925 [2024-11-18 20:36:45.905768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:33.925 [2024-11-18 20:36:45.919422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:33.925 [2024-11-18 20:36:45.919451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.925 [2024-11-18 20:36:45.919482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:34.184 [2024-11-18 20:36:45.934775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:34.184 [2024-11-18 20:36:45.934817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.184 [2024-11-18 20:36:45.934846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:34.184 [2024-11-18 20:36:45.945870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:34.184 [2024-11-18 20:36:45.945919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.184 [2024-11-18 20:36:45.945947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:34.184 [2024-11-18 20:36:45.960801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:34.184 [2024-11-18 20:36:45.960841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.184 [2024-11-18 20:36:45.960862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:34.184 [2024-11-18 20:36:45.972335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:34.184 [2024-11-18 20:36:45.972364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.184 [2024-11-18 20:36:45.972394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:34.184 [2024-11-18 20:36:45.986516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:34.184 [2024-11-18 20:36:45.986546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.184 [2024-11-18 20:36:45.986578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:34.184 [2024-11-18 20:36:46.001396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:34.184 [2024-11-18 20:36:46.001426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.184 [2024-11-18 20:36:46.001457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:34.184 [2024-11-18 20:36:46.012669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:34.184 [2024-11-18 20:36:46.012698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.184 [2024-11-18 20:36:46.012714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:34.184 [2024-11-18 20:36:46.025503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:34.184 [2024-11-18 20:36:46.025532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.184 [2024-11-18 20:36:46.025564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:34.184 [2024-11-18 20:36:46.039216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:34.184 [2024-11-18 20:36:46.039245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.184 [2024-11-18 20:36:46.039277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:34.184 [2024-11-18 20:36:46.053526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:34.184 [2024-11-18 20:36:46.053561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.184 [2024-11-18 20:36:46.053599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:34.184 [2024-11-18 20:36:46.066324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:34.184 [2024-11-18 20:36:46.066371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.184 [2024-11-18 20:36:46.066390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:34.184 [2024-11-18 20:36:46.082092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:34.184 [2024-11-18 20:36:46.082121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.184 [2024-11-18 20:36:46.082152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:34.184 [2024-11-18 20:36:46.097127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:34.184 [2024-11-18 20:36:46.097171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.184 [2024-11-18 20:36:46.097188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:34.184 [2024-11-18 20:36:46.108803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:34.184 [2024-11-18 20:36:46.108833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.184 [2024-11-18 20:36:46.108850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:34.184 [2024-11-18 20:36:46.124596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:34.184 [2024-11-18 20:36:46.124629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.184 [2024-11-18 20:36:46.124656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:34.184 [2024-11-18 20:36:46.139559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:34.184 [2024-11-18 20:36:46.139590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.184 [2024-11-18 20:36:46.139622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:34.184 [2024-11-18 20:36:46.153679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:34.184 [2024-11-18 20:36:46.153712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.184 [2024-11-18 20:36:46.153730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:34.184 [2024-11-18 20:36:46.165715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:34.184 [2024-11-18 20:36:46.165762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.184 [2024-11-18 20:36:46.165779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:34.184 [2024-11-18 20:36:46.178633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:34.184 [2024-11-18 20:36:46.178683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.184 [2024-11-18 20:36:46.178714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:34.443 [2024-11-18 20:36:46.193577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:34.443 [2024-11-18 20:36:46.193622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.443 [2024-11-18 20:36:46.193647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:34.443 [2024-11-18 20:36:46.205123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:34.443 [2024-11-18 20:36:46.205155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.443 [2024-11-18 20:36:46.205189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:34.443 [2024-11-18 20:36:46.218037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:34.443 [2024-11-18 20:36:46.218075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.443 [2024-11-18 20:36:46.218106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:34.443 [2024-11-18 20:36:46.232148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:34.443 [2024-11-18 20:36:46.232180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.443 [2024-11-18 20:36:46.232213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:34.443 [2024-11-18 20:36:46.243985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0)
00:35:34.443 [2024-11-18 20:36:46.244017] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.443 [2024-11-18 20:36:46.244057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.443 [2024-11-18 20:36:46.256088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:34.443 [2024-11-18 20:36:46.256119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.443 [2024-11-18 20:36:46.256153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.443 [2024-11-18 20:36:46.276178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:34.443 [2024-11-18 20:36:46.276227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.443 [2024-11-18 20:36:46.276246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.443 [2024-11-18 20:36:46.288403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:34.443 [2024-11-18 20:36:46.288436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.443 [2024-11-18 20:36:46.288475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.443 [2024-11-18 20:36:46.303918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1eee3f0) 00:35:34.443 [2024-11-18 20:36:46.303963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.443 [2024-11-18 20:36:46.303978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.443 [2024-11-18 20:36:46.315583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:34.443 [2024-11-18 20:36:46.315630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.444 [2024-11-18 20:36:46.315658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.444 [2024-11-18 20:36:46.329162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:34.444 [2024-11-18 20:36:46.329194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.444 [2024-11-18 20:36:46.329229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.444 [2024-11-18 20:36:46.343497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:34.444 [2024-11-18 20:36:46.343527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.444 [2024-11-18 20:36:46.343560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.444 [2024-11-18 20:36:46.358985] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:34.444 [2024-11-18 20:36:46.359022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.444 [2024-11-18 20:36:46.359041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.444 [2024-11-18 20:36:46.371106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:34.444 [2024-11-18 20:36:46.371141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.444 [2024-11-18 20:36:46.371161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.444 [2024-11-18 20:36:46.385547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:34.444 [2024-11-18 20:36:46.385593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.444 [2024-11-18 20:36:46.385610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.444 [2024-11-18 20:36:46.399287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:34.444 [2024-11-18 20:36:46.399319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.444 [2024-11-18 20:36:46.399352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:34.444 [2024-11-18 20:36:46.412902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:34.444 [2024-11-18 20:36:46.412952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.444 [2024-11-18 20:36:46.412969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.444 [2024-11-18 20:36:46.426022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:34.444 [2024-11-18 20:36:46.426071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.444 [2024-11-18 20:36:46.426088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.444 [2024-11-18 20:36:46.438631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:34.444 [2024-11-18 20:36:46.438687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.444 [2024-11-18 20:36:46.438704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.702 [2024-11-18 20:36:46.454115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:34.702 [2024-11-18 20:36:46.454158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.702 [2024-11-18 20:36:46.454189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.702 [2024-11-18 20:36:46.466261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:34.702 [2024-11-18 20:36:46.466291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.702 [2024-11-18 20:36:46.466322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.702 [2024-11-18 20:36:46.481007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:34.702 [2024-11-18 20:36:46.481040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.702 [2024-11-18 20:36:46.481058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.702 [2024-11-18 20:36:46.496371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:34.702 [2024-11-18 20:36:46.496403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.702 [2024-11-18 20:36:46.496446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.703 [2024-11-18 20:36:46.508509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:34.703 [2024-11-18 20:36:46.508542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.703 [2024-11-18 20:36:46.508563] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.703 [2024-11-18 20:36:46.521086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:34.703 [2024-11-18 20:36:46.521138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.703 [2024-11-18 20:36:46.521159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.703 [2024-11-18 20:36:46.534554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:34.703 [2024-11-18 20:36:46.534584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.703 [2024-11-18 20:36:46.534615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.703 [2024-11-18 20:36:46.551297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:34.703 [2024-11-18 20:36:46.551328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.703 [2024-11-18 20:36:46.551361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.703 [2024-11-18 20:36:46.566955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:34.703 [2024-11-18 20:36:46.567001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2856 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:34.703 [2024-11-18 20:36:46.567029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.703 [2024-11-18 20:36:46.579923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:34.703 [2024-11-18 20:36:46.579971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.703 [2024-11-18 20:36:46.579988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.703 [2024-11-18 20:36:46.594152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:34.703 [2024-11-18 20:36:46.594199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.703 [2024-11-18 20:36:46.594218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.703 [2024-11-18 20:36:46.604753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:34.703 [2024-11-18 20:36:46.604798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.703 [2024-11-18 20:36:46.604815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.703 [2024-11-18 20:36:46.618837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:34.703 [2024-11-18 20:36:46.618869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:90 nsid:1 lba:13722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.703 [2024-11-18 20:36:46.618902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.703 [2024-11-18 20:36:46.632907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:34.703 [2024-11-18 20:36:46.632936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.703 [2024-11-18 20:36:46.632968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.703 18432.50 IOPS, 72.00 MiB/s [2024-11-18T19:36:46.711Z] [2024-11-18 20:36:46.643773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eee3f0) 00:35:34.703 [2024-11-18 20:36:46.643823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.703 [2024-11-18 20:36:46.643842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.703 00:35:34.703 Latency(us) 00:35:34.703 [2024-11-18T19:36:46.711Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:34.703 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:34.703 nvme0n1 : 2.01 18446.61 72.06 0.00 0.00 6929.23 3495.25 23398.78 00:35:34.703 [2024-11-18T19:36:46.711Z] =================================================================================================================== 00:35:34.703 [2024-11-18T19:36:46.711Z] Total : 18446.61 72.06 0.00 0.00 6929.23 3495.25 23398.78 00:35:34.703 { 00:35:34.703 "results": [ 00:35:34.703 { 00:35:34.703 "job": 
"nvme0n1", 00:35:34.703 "core_mask": "0x2", 00:35:34.703 "workload": "randread", 00:35:34.703 "status": "finished", 00:35:34.703 "queue_depth": 128, 00:35:34.703 "io_size": 4096, 00:35:34.703 "runtime": 2.005409, 00:35:34.703 "iops": 18446.611140171408, 00:35:34.703 "mibps": 72.05707476629456, 00:35:34.703 "io_failed": 0, 00:35:34.703 "io_timeout": 0, 00:35:34.703 "avg_latency_us": 6929.228425718178, 00:35:34.703 "min_latency_us": 3495.2533333333336, 00:35:34.703 "max_latency_us": 23398.77925925926 00:35:34.703 } 00:35:34.703 ], 00:35:34.703 "core_count": 1 00:35:34.703 } 00:35:34.703 20:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:34.703 20:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:34.703 20:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:34.703 | .driver_specific 00:35:34.703 | .nvme_error 00:35:34.703 | .status_code 00:35:34.703 | .command_transient_transport_error' 00:35:34.703 20:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:34.961 20:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 145 > 0 )) 00:35:34.961 20:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 396663 00:35:34.961 20:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 396663 ']' 00:35:34.961 20:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 396663 00:35:34.961 20:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:34.961 20:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:34.961 20:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 396663 00:35:35.219 20:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:35.219 20:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:35.219 20:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 396663' 00:35:35.219 killing process with pid 396663 00:35:35.219 20:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 396663 00:35:35.219 Received shutdown signal, test time was about 2.000000 seconds 00:35:35.219 00:35:35.219 Latency(us) 00:35:35.219 [2024-11-18T19:36:47.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:35.219 [2024-11-18T19:36:47.227Z] =================================================================================================================== 00:35:35.219 [2024-11-18T19:36:47.227Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:35.219 20:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 396663 00:35:35.219 20:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:35:35.219 20:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:35.219 20:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:35.219 20:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:35.219 20:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:35.219 20:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@58 -- # bperfpid=397065 00:35:35.219 20:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:35:35.219 20:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 397065 /var/tmp/bperf.sock 00:35:35.219 20:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 397065 ']' 00:35:35.219 20:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:35.219 20:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:35.219 20:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:35.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:35.220 20:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:35.220 20:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:35.220 [2024-11-18 20:36:47.214123] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:35:35.220 [2024-11-18 20:36:47.214207] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid397065 ] 00:35:35.220 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:35.220 Zero copy mechanism will not be used. 
00:35:35.478 [2024-11-18 20:36:47.279106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:35.478 [2024-11-18 20:36:47.322847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:35.478 20:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:35.478 20:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:35.478 20:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:35.478 20:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:35.736 20:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:35.736 20:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.736 20:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:35.736 20:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.736 20:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:35.736 20:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:36.302 nvme0n1 00:35:36.302 20:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:36.302 20:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.302 20:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:36.302 20:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.302 20:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:36.302 20:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:36.561 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:36.561 Zero copy mechanism will not be used. 00:35:36.561 Running I/O for 2 seconds... 00:35:36.561 [2024-11-18 20:36:48.383147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.561 [2024-11-18 20:36:48.383208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.561 [2024-11-18 20:36:48.383229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:36.561 [2024-11-18 20:36:48.388933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.561 [2024-11-18 20:36:48.388967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.561 [2024-11-18 20:36:48.388999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:36.561 
[2024-11-18 20:36:48.394196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.561 [2024-11-18 20:36:48.394226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.561 [2024-11-18 20:36:48.394259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:36.561 [2024-11-18 20:36:48.399577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.561 [2024-11-18 20:36:48.399632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.561 [2024-11-18 20:36:48.399656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:36.561 [2024-11-18 20:36:48.405151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.561 [2024-11-18 20:36:48.405179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.561 [2024-11-18 20:36:48.405195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:36.561 [2024-11-18 20:36:48.410189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.561 [2024-11-18 20:36:48.410220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.561 [2024-11-18 20:36:48.410238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:36.561 [2024-11-18 20:36:48.415306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.561 [2024-11-18 20:36:48.415337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.561 [2024-11-18 20:36:48.415355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:36.561 [2024-11-18 20:36:48.418845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.561 [2024-11-18 20:36:48.418876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.561 [2024-11-18 20:36:48.418894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:36.561 [2024-11-18 20:36:48.422873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.561 [2024-11-18 20:36:48.422904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.561 [2024-11-18 20:36:48.422921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:36.561 [2024-11-18 20:36:48.427882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.561 [2024-11-18 20:36:48.427912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.561 [2024-11-18 20:36:48.427942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:36.561 [2024-11-18 20:36:48.431260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.561 [2024-11-18 20:36:48.431289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.561 [2024-11-18 20:36:48.431322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:36.561 [2024-11-18 20:36:48.435093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.561 [2024-11-18 20:36:48.435137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.561 [2024-11-18 20:36:48.435154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:36.561 [2024-11-18 20:36:48.440034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.561 [2024-11-18 20:36:48.440065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.561 [2024-11-18 20:36:48.440097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:36.561 [2024-11-18 20:36:48.445154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.561 [2024-11-18 20:36:48.445182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:36.561 [2024-11-18 20:36:48.445199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:36.561 [2024-11-18 20:36:48.450353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.562 [2024-11-18 20:36:48.450383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.562 [2024-11-18 20:36:48.450415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:36.562 [2024-11-18 20:36:48.455728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.562 [2024-11-18 20:36:48.455771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.562 [2024-11-18 20:36:48.455793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:36.562 [2024-11-18 20:36:48.460974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.562 [2024-11-18 20:36:48.461004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.562 [2024-11-18 20:36:48.461022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:36.562 [2024-11-18 20:36:48.466020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.562 [2024-11-18 20:36:48.466048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.562 [2024-11-18 20:36:48.466063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:36.562 [2024-11-18 20:36:48.471030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.562 [2024-11-18 20:36:48.471061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.562 [2024-11-18 20:36:48.471094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:36.562 [2024-11-18 20:36:48.476370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.562 [2024-11-18 20:36:48.476400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.562 [2024-11-18 20:36:48.476433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:36.562 [2024-11-18 20:36:48.481670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.562 [2024-11-18 20:36:48.481715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.562 [2024-11-18 20:36:48.481733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:36.562 [2024-11-18 20:36:48.486811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.562 [2024-11-18 20:36:48.486842] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.562 [2024-11-18 20:36:48.486860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:36.562 [2024-11-18 20:36:48.492281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.562 [2024-11-18 20:36:48.492327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.562 [2024-11-18 20:36:48.492345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:36.562 [2024-11-18 20:36:48.498501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.562 [2024-11-18 20:36:48.498533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.562 [2024-11-18 20:36:48.498551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:36.562 [2024-11-18 20:36:48.506087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.562 [2024-11-18 20:36:48.506123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.562 [2024-11-18 20:36:48.506157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:36.562 [2024-11-18 20:36:48.512343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 
00:35:36.562 [2024-11-18 20:36:48.512391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.562 [2024-11-18 20:36:48.512409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:36.562 [2024-11-18 20:36:48.518313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.562 [2024-11-18 20:36:48.518344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.562 [2024-11-18 20:36:48.518361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:36.562 [2024-11-18 20:36:48.524287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.562 [2024-11-18 20:36:48.524318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.562 [2024-11-18 20:36:48.524335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:36.562 [2024-11-18 20:36:48.530274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.562 [2024-11-18 20:36:48.530305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.562 [2024-11-18 20:36:48.530323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:36.562 [2024-11-18 20:36:48.535948] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.562 [2024-11-18 20:36:48.535977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.562 [2024-11-18 20:36:48.535994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:36.562 [2024-11-18 20:36:48.540993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.562 [2024-11-18 20:36:48.541023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.562 [2024-11-18 20:36:48.541041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:36.562 [2024-11-18 20:36:48.546131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.562 [2024-11-18 20:36:48.546162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.562 [2024-11-18 20:36:48.546180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:36.562 [2024-11-18 20:36:48.551412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.562 [2024-11-18 20:36:48.551458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.562 [2024-11-18 20:36:48.551475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f 
p:0 m:0 dnr:0 00:35:36.562 [2024-11-18 20:36:48.557288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.562 [2024-11-18 20:36:48.557319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.562 [2024-11-18 20:36:48.557336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:36.562 [2024-11-18 20:36:48.562582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.562 [2024-11-18 20:36:48.562613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.562 [2024-11-18 20:36:48.562653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:36.822 [2024-11-18 20:36:48.568072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.822 [2024-11-18 20:36:48.568102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.822 [2024-11-18 20:36:48.568119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:36.822 [2024-11-18 20:36:48.573237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.822 [2024-11-18 20:36:48.573280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.822 [2024-11-18 20:36:48.573297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:36.822 [2024-11-18 20:36:48.578295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.822 [2024-11-18 20:36:48.578325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.822 [2024-11-18 20:36:48.578342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:36.822 [2024-11-18 20:36:48.583379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.822 [2024-11-18 20:36:48.583409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.822 [2024-11-18 20:36:48.583426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:36.822 [2024-11-18 20:36:48.588470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.822 [2024-11-18 20:36:48.588499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.822 [2024-11-18 20:36:48.588516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:36.822 [2024-11-18 20:36:48.593516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.822 [2024-11-18 20:36:48.593547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.822 [2024-11-18 20:36:48.593564] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:36.822 [2024-11-18 20:36:48.598866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.822 [2024-11-18 20:36:48.598897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.822 [2024-11-18 20:36:48.598920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:36.822 [2024-11-18 20:36:48.604255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.822 [2024-11-18 20:36:48.604286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.822 [2024-11-18 20:36:48.604303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:36.822 [2024-11-18 20:36:48.609365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.822 [2024-11-18 20:36:48.609395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.822 [2024-11-18 20:36:48.609413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:36.822 [2024-11-18 20:36:48.615519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.822 [2024-11-18 20:36:48.615551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:36.822 [2024-11-18 20:36:48.615569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:36.822 [2024-11-18 20:36:48.619664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.822 [2024-11-18 20:36:48.619693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.822 [2024-11-18 20:36:48.619726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:36.822 [2024-11-18 20:36:48.626540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.822 [2024-11-18 20:36:48.626587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.822 [2024-11-18 20:36:48.626605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:36.822 [2024-11-18 20:36:48.632296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.822 [2024-11-18 20:36:48.632327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.822 [2024-11-18 20:36:48.632345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:36.822 [2024-11-18 20:36:48.638874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.822 [2024-11-18 20:36:48.638905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 
nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.822 [2024-11-18 20:36:48.638922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:36.822 [2024-11-18 20:36:48.645140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.822 [2024-11-18 20:36:48.645172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.822 [2024-11-18 20:36:48.645190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:36.822 [2024-11-18 20:36:48.651510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.822 [2024-11-18 20:36:48.651557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.822 [2024-11-18 20:36:48.651573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:36.822 [2024-11-18 20:36:48.658662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.822 [2024-11-18 20:36:48.658712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.822 [2024-11-18 20:36:48.658730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:36.822 [2024-11-18 20:36:48.666041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.822 [2024-11-18 20:36:48.666071] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.822 [2024-11-18 20:36:48.666102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:36.822 [2024-11-18 20:36:48.672904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.822 [2024-11-18 20:36:48.672960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.822 [2024-11-18 20:36:48.672978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:36.822 [2024-11-18 20:36:48.679318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.822 [2024-11-18 20:36:48.679364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.822 [2024-11-18 20:36:48.679382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:36.822 [2024-11-18 20:36:48.686258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.822 [2024-11-18 20:36:48.686290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.822 [2024-11-18 20:36:48.686308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:36.822 [2024-11-18 20:36:48.692235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 
00:35:36.822 [2024-11-18 20:36:48.692281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.822 [2024-11-18 20:36:48.692298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:36.822 [2024-11-18 20:36:48.697426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.822 [2024-11-18 20:36:48.697455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.822 [2024-11-18 20:36:48.697488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:36.822 [2024-11-18 20:36:48.702554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.822 [2024-11-18 20:36:48.702584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.822 [2024-11-18 20:36:48.702606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:36.822 [2024-11-18 20:36:48.707995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.822 [2024-11-18 20:36:48.708034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.822 [2024-11-18 20:36:48.708066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:36.822 [2024-11-18 20:36:48.713616] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.822 [2024-11-18 20:36:48.713656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.822 [2024-11-18 20:36:48.713685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:36.822 [2024-11-18 20:36:48.719201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.823 [2024-11-18 20:36:48.719230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.823 [2024-11-18 20:36:48.719246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:36.823 [2024-11-18 20:36:48.724178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.823 [2024-11-18 20:36:48.724209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.823 [2024-11-18 20:36:48.724241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:36.823 [2024-11-18 20:36:48.729126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.823 [2024-11-18 20:36:48.729171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.823 [2024-11-18 20:36:48.729189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 
m:0 dnr:0 00:35:36.823 [2024-11-18 20:36:48.734586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.823 [2024-11-18 20:36:48.734616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.823 [2024-11-18 20:36:48.734658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:36.823 [2024-11-18 20:36:48.740143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.823 [2024-11-18 20:36:48.740191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.823 [2024-11-18 20:36:48.740207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:36.823 [2024-11-18 20:36:48.745432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.823 [2024-11-18 20:36:48.745480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.823 [2024-11-18 20:36:48.745498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:36.823 [2024-11-18 20:36:48.750714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.823 [2024-11-18 20:36:48.750750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.823 [2024-11-18 20:36:48.750783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:36.823 [2024-11-18 20:36:48.754171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.823 [2024-11-18 20:36:48.754201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.823 [2024-11-18 20:36:48.754219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:36.823 [2024-11-18 20:36:48.759583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.823 [2024-11-18 20:36:48.759627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.823 [2024-11-18 20:36:48.759666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:36.823 [2024-11-18 20:36:48.765307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.823 [2024-11-18 20:36:48.765350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.823 [2024-11-18 20:36:48.765367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:36.823 [2024-11-18 20:36:48.771635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.823 [2024-11-18 20:36:48.771689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.823 [2024-11-18 20:36:48.771707] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:36.823 [2024-11-18 20:36:48.777006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.823 [2024-11-18 20:36:48.777052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.823 [2024-11-18 20:36:48.777069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:36.823 [2024-11-18 20:36:48.783604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.823 [2024-11-18 20:36:48.783660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.823 [2024-11-18 20:36:48.783690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:36.823 [2024-11-18 20:36:48.791760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.823 [2024-11-18 20:36:48.791807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.823 [2024-11-18 20:36:48.791825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:36.823 [2024-11-18 20:36:48.797987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.823 [2024-11-18 20:36:48.798017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:36.823 [2024-11-18 20:36:48.798049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:36.823 [2024-11-18 20:36:48.803665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.823 [2024-11-18 20:36:48.803693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.823 [2024-11-18 20:36:48.803708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:36.823 [2024-11-18 20:36:48.808917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.823 [2024-11-18 20:36:48.808963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.823 [2024-11-18 20:36:48.808980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:36.823 [2024-11-18 20:36:48.814152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.823 [2024-11-18 20:36:48.814195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.823 [2024-11-18 20:36:48.814212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:36.823 [2024-11-18 20:36:48.819258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.823 [2024-11-18 20:36:48.819288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.823 [2024-11-18 20:36:48.819304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:36.823 [2024-11-18 20:36:48.824325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:36.823 [2024-11-18 20:36:48.824356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:36.823 [2024-11-18 20:36:48.824374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:37.086 [2024-11-18 20:36:48.829549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.086 [2024-11-18 20:36:48.829578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.086 [2024-11-18 20:36:48.829596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:37.086 [2024-11-18 20:36:48.834735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.086 [2024-11-18 20:36:48.834765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.086 [2024-11-18 20:36:48.834782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:37.086 [2024-11-18 20:36:48.839815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.086 [2024-11-18 20:36:48.839845] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.086 [2024-11-18 20:36:48.839862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:37.086 [2024-11-18 20:36:48.845008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.086 [2024-11-18 20:36:48.845038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.086 [2024-11-18 20:36:48.845062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:37.086 [2024-11-18 20:36:48.850186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.086 [2024-11-18 20:36:48.850214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.086 [2024-11-18 20:36:48.850245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:37.086 [2024-11-18 20:36:48.855273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.086 [2024-11-18 20:36:48.855302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.086 [2024-11-18 20:36:48.855318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:37.086 [2024-11-18 20:36:48.860363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 
00:35:37.086 [2024-11-18 20:36:48.860392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.086 [2024-11-18 20:36:48.860424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:37.086 [2024-11-18 20:36:48.865964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.086 [2024-11-18 20:36:48.865994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.086 [2024-11-18 20:36:48.866011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:37.086 [2024-11-18 20:36:48.871123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.086 [2024-11-18 20:36:48.871154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.086 [2024-11-18 20:36:48.871171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:37.086 [2024-11-18 20:36:48.876462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.086 [2024-11-18 20:36:48.876491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.086 [2024-11-18 20:36:48.876523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:37.086 [2024-11-18 20:36:48.882142] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.086 [2024-11-18 20:36:48.882187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.086 [2024-11-18 20:36:48.882203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:37.086 [2024-11-18 20:36:48.888255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.086 [2024-11-18 20:36:48.888285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.086 [2024-11-18 20:36:48.888317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:37.086 [2024-11-18 20:36:48.893545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.086 [2024-11-18 20:36:48.893580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.086 [2024-11-18 20:36:48.893614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:37.086 [2024-11-18 20:36:48.899542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.086 [2024-11-18 20:36:48.899573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.086 [2024-11-18 20:36:48.899607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 
m:0 dnr:0 00:35:37.086 [2024-11-18 20:36:48.906819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.086 [2024-11-18 20:36:48.906850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.086 [2024-11-18 20:36:48.906883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:37.086 [2024-11-18 20:36:48.913753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.086 [2024-11-18 20:36:48.913785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.086 [2024-11-18 20:36:48.913803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:37.086 [2024-11-18 20:36:48.921448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.086 [2024-11-18 20:36:48.921479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.086 [2024-11-18 20:36:48.921512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:37.086 [2024-11-18 20:36:48.929955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.087 [2024-11-18 20:36:48.929986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.087 [2024-11-18 20:36:48.930004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:37.087 [2024-11-18 20:36:48.937191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.087 [2024-11-18 20:36:48.937221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.087 [2024-11-18 20:36:48.937254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:37.087 [2024-11-18 20:36:48.945590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.087 [2024-11-18 20:36:48.945621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.087 [2024-11-18 20:36:48.945646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:37.087 [2024-11-18 20:36:48.953797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.087 [2024-11-18 20:36:48.953829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.087 [2024-11-18 20:36:48.953847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:37.087 [2024-11-18 20:36:48.960994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.087 [2024-11-18 20:36:48.961026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.087 [2024-11-18 20:36:48.961059] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:37.087 [2024-11-18 20:36:48.969262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.087 [2024-11-18 20:36:48.969293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.087 [2024-11-18 20:36:48.969311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:37.087 [2024-11-18 20:36:48.976962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.087 [2024-11-18 20:36:48.976993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.087 [2024-11-18 20:36:48.977011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:37.087 [2024-11-18 20:36:48.985008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.087 [2024-11-18 20:36:48.985039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.087 [2024-11-18 20:36:48.985057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:37.087 [2024-11-18 20:36:48.992792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.087 [2024-11-18 20:36:48.992824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:37.087 [2024-11-18 20:36:48.992842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:37.087 [2024-11-18 20:36:48.999990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.087 [2024-11-18 20:36:49.000023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.087 [2024-11-18 20:36:49.000042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:37.087 [2024-11-18 20:36:49.007256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.087 [2024-11-18 20:36:49.007288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.087 [2024-11-18 20:36:49.007307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:37.087 [2024-11-18 20:36:49.014854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.087 [2024-11-18 20:36:49.014900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.087 [2024-11-18 20:36:49.014917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:37.087 [2024-11-18 20:36:49.022452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.087 [2024-11-18 20:36:49.022484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.087 [2024-11-18 20:36:49.022507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:37.087 [2024-11-18 20:36:49.029736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.087 [2024-11-18 20:36:49.029766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.087 [2024-11-18 20:36:49.029799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:37.087 [2024-11-18 20:36:49.035240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.087 [2024-11-18 20:36:49.035271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.087 [2024-11-18 20:36:49.035289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:37.087 [2024-11-18 20:36:49.038165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.087 [2024-11-18 20:36:49.038193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.087 [2024-11-18 20:36:49.038210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:37.087 [2024-11-18 20:36:49.043361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.087 [2024-11-18 20:36:49.043389] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.087 [2024-11-18 20:36:49.043405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:37.087 [2024-11-18 20:36:49.048183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.087 [2024-11-18 20:36:49.048212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.087 [2024-11-18 20:36:49.048229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:37.087 [2024-11-18 20:36:49.053189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.087 [2024-11-18 20:36:49.053232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.087 [2024-11-18 20:36:49.053249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:37.087 [2024-11-18 20:36:49.058189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.087 [2024-11-18 20:36:49.058219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.087 [2024-11-18 20:36:49.058251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:37.087 [2024-11-18 20:36:49.063315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 
00:35:37.087 [2024-11-18 20:36:49.063343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.087 [2024-11-18 20:36:49.063360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:37.087 [2024-11-18 20:36:49.068406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.087 [2024-11-18 20:36:49.068440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.087 [2024-11-18 20:36:49.068472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:37.087 [2024-11-18 20:36:49.073955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.087 [2024-11-18 20:36:49.073984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.087 [2024-11-18 20:36:49.074000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:37.087 [2024-11-18 20:36:49.078968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.087 [2024-11-18 20:36:49.079013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.087 [2024-11-18 20:36:49.079030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:37.087 [2024-11-18 20:36:49.084235] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.087 [2024-11-18 20:36:49.084264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.087 [2024-11-18 20:36:49.084281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:37.360 [2024-11-18 20:36:49.090126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.360 [2024-11-18 20:36:49.090157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.360 [2024-11-18 20:36:49.090189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:37.360 [2024-11-18 20:36:49.096469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.360 [2024-11-18 20:36:49.096516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.360 [2024-11-18 20:36:49.096533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:37.360 [2024-11-18 20:36:49.101808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.360 [2024-11-18 20:36:49.101839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.360 [2024-11-18 20:36:49.101857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002f p:0 
m:0 dnr:0 00:35:37.361 [2024-11-18 20:36:49.107339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.361 [2024-11-18 20:36:49.107370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.361 [2024-11-18 20:36:49.107388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:37.361 [2024-11-18 20:36:49.112634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.361 [2024-11-18 20:36:49.112690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.361 [2024-11-18 20:36:49.112708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:37.361 [2024-11-18 20:36:49.118220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.361 [2024-11-18 20:36:49.118251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.361 [2024-11-18 20:36:49.118269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:37.361 [2024-11-18 20:36:49.123368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.361 [2024-11-18 20:36:49.123399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.361 [2024-11-18 20:36:49.123416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:37.361 [2024-11-18 20:36:49.128730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.361 [2024-11-18 20:36:49.128761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.361 [2024-11-18 20:36:49.128778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:37.361 [2024-11-18 20:36:49.134709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.361 [2024-11-18 20:36:49.134741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.361 [2024-11-18 20:36:49.134758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:37.361 [2024-11-18 20:36:49.138684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.361 [2024-11-18 20:36:49.138714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.361 [2024-11-18 20:36:49.138747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:37.361 [2024-11-18 20:36:49.144329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.361 [2024-11-18 20:36:49.144374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.361 [2024-11-18 20:36:49.144392] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:37.361 [2024-11-18 20:36:49.150161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.361 [2024-11-18 20:36:49.150192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.361 [2024-11-18 20:36:49.150224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:37.361 [2024-11-18 20:36:49.156196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.361 [2024-11-18 20:36:49.156226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.361 [2024-11-18 20:36:49.156258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:37.361 [2024-11-18 20:36:49.160613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.361 [2024-11-18 20:36:49.160666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.361 [2024-11-18 20:36:49.160693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:37.361 [2024-11-18 20:36:49.165708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.361 [2024-11-18 20:36:49.165754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.361 [2024-11-18 20:36:49.165772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:37.361 [2024-11-18 20:36:49.170592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.361 [2024-11-18 20:36:49.170644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.361 [2024-11-18 20:36:49.170678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:37.361 [2024-11-18 20:36:49.175822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.361 [2024-11-18 20:36:49.175851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.361 [2024-11-18 20:36:49.175869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:37.361 [2024-11-18 20:36:49.180889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.361 [2024-11-18 20:36:49.180917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.361 [2024-11-18 20:36:49.180947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:37.361 [2024-11-18 20:36:49.186005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.361 [2024-11-18 20:36:49.186032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.361 [2024-11-18 20:36:49.186063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:37.361 [2024-11-18 20:36:49.191044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.361 [2024-11-18 20:36:49.191088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.361 [2024-11-18 20:36:49.191106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:37.361 [2024-11-18 20:36:49.196286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.361 [2024-11-18 20:36:49.196314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.361 [2024-11-18 20:36:49.196345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:37.361 [2024-11-18 20:36:49.201457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.361 [2024-11-18 20:36:49.201484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.361 [2024-11-18 20:36:49.201499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:37.361 [2024-11-18 20:36:49.206700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.361 [2024-11-18 20:36:49.206743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.361 [2024-11-18 20:36:49.206759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:37.361 [2024-11-18 20:36:49.212121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.361 [2024-11-18 20:36:49.212162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.361 [2024-11-18 20:36:49.212178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:37.361 [2024-11-18 20:36:49.217787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.361 [2024-11-18 20:36:49.217834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.361 [2024-11-18 20:36:49.217851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:37.361 [2024-11-18 20:36:49.223785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.361 [2024-11-18 20:36:49.223815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.361 [2024-11-18 20:36:49.223848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:37.361 [2024-11-18 20:36:49.228895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.361 [2024-11-18 20:36:49.228937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.361 [2024-11-18 20:36:49.228953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:37.361 [2024-11-18 20:36:49.233914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.361 [2024-11-18 20:36:49.233958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.361 [2024-11-18 20:36:49.233975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:37.361 [2024-11-18 20:36:49.239074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.361 [2024-11-18 20:36:49.239100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.361 [2024-11-18 20:36:49.239115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:37.361 [2024-11-18 20:36:49.244300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.361 [2024-11-18 20:36:49.244330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.361 [2024-11-18 20:36:49.244362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:37.361 [2024-11-18 20:36:49.249357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.361 [2024-11-18 20:36:49.249385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.361 [2024-11-18 20:36:49.249421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:37.361 [2024-11-18 20:36:49.254471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.361 [2024-11-18 20:36:49.254499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.361 [2024-11-18 20:36:49.254531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:37.361 [2024-11-18 20:36:49.259660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.361 [2024-11-18 20:36:49.259703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.361 [2024-11-18 20:36:49.259720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:37.361 [2024-11-18 20:36:49.265308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.362 [2024-11-18 20:36:49.265335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.362 [2024-11-18 20:36:49.265366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:37.362 [2024-11-18 20:36:49.272815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.362 [2024-11-18 20:36:49.272845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.362 [2024-11-18 20:36:49.272877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:37.362 [2024-11-18 20:36:49.279484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.362 [2024-11-18 20:36:49.279514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.362 [2024-11-18 20:36:49.279545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:37.362 [2024-11-18 20:36:49.285761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.362 [2024-11-18 20:36:49.285793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.362 [2024-11-18 20:36:49.285811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:37.362 [2024-11-18 20:36:49.292044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.362 [2024-11-18 20:36:49.292089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.362 [2024-11-18 20:36:49.292106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:37.362 [2024-11-18 20:36:49.298760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.362 [2024-11-18 20:36:49.298791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.362 [2024-11-18 20:36:49.298809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:37.362 [2024-11-18 20:36:49.306822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.362 [2024-11-18 20:36:49.306859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.362 [2024-11-18 20:36:49.306892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:37.362 [2024-11-18 20:36:49.313983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.362 [2024-11-18 20:36:49.314012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.362 [2024-11-18 20:36:49.314044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:37.362 [2024-11-18 20:36:49.319060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.362 [2024-11-18 20:36:49.319105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.362 [2024-11-18 20:36:49.319122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:37.362 [2024-11-18 20:36:49.324368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.362 [2024-11-18 20:36:49.324398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.362 [2024-11-18 20:36:49.324415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:37.362 [2024-11-18 20:36:49.329941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.362 [2024-11-18 20:36:49.329986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.362 [2024-11-18 20:36:49.330004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:37.362 [2024-11-18 20:36:49.337233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.362 [2024-11-18 20:36:49.337263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.362 [2024-11-18 20:36:49.337295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:37.362 [2024-11-18 20:36:49.343872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.362 [2024-11-18 20:36:49.343902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.362 [2024-11-18 20:36:49.343919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:37.362 [2024-11-18 20:36:49.349717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.362 [2024-11-18 20:36:49.349746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.362 [2024-11-18 20:36:49.349778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:37.362 [2024-11-18 20:36:49.355269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.362 [2024-11-18 20:36:49.355300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.362 [2024-11-18 20:36:49.355318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:37.668 [2024-11-18 20:36:49.362324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.668 [2024-11-18 20:36:49.362355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.668 [2024-11-18 20:36:49.362373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:37.668 [2024-11-18 20:36:49.370290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.668 [2024-11-18 20:36:49.370321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.668 [2024-11-18 20:36:49.370338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:37.668 5356.00 IOPS, 669.50 MiB/s [2024-11-18T19:36:49.676Z] [2024-11-18 20:36:49.379443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.668 [2024-11-18 20:36:49.379475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.668 [2024-11-18 20:36:49.379492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:37.668 [2024-11-18 20:36:49.387173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.668 [2024-11-18 20:36:49.387203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.668 [2024-11-18 20:36:49.387237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:37.668 [2024-11-18 20:36:49.395507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.668 [2024-11-18 20:36:49.395538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.668 [2024-11-18 20:36:49.395570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:37.668 [2024-11-18 20:36:49.403390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.668 [2024-11-18 20:36:49.403422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.668 [2024-11-18 20:36:49.403440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:37.668 [2024-11-18 20:36:49.409415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.668 [2024-11-18 20:36:49.409448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.668 [2024-11-18 20:36:49.409465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:37.668 [2024-11-18 20:36:49.414483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.668 [2024-11-18 20:36:49.414515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.668 [2024-11-18 20:36:49.414533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:37.668 [2024-11-18 20:36:49.419597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.668 [2024-11-18 20:36:49.419627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.668 [2024-11-18 20:36:49.419661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:37.668 [2024-11-18 20:36:49.424573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.668 [2024-11-18 20:36:49.424603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.668 [2024-11-18 20:36:49.424620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:37.668 [2024-11-18 20:36:49.429697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.668 [2024-11-18 20:36:49.429726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.668 [2024-11-18 20:36:49.429759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:37.668 [2024-11-18 20:36:49.434909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.668 [2024-11-18 20:36:49.434939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.668 [2024-11-18 20:36:49.434956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:37.668 [2024-11-18 20:36:49.440205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.668 [2024-11-18 20:36:49.440251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.668 [2024-11-18 20:36:49.440268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:37.668 [2024-11-18 20:36:49.445764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.669 [2024-11-18 20:36:49.445794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.669 [2024-11-18 20:36:49.445826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:37.669 [2024-11-18 20:36:49.450533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.669 [2024-11-18 20:36:49.450563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.669 [2024-11-18 20:36:49.450580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:37.669 [2024-11-18 20:36:49.455509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.669 [2024-11-18 20:36:49.455538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.669 [2024-11-18 20:36:49.455555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:37.669 [2024-11-18 20:36:49.460472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.669 [2024-11-18 20:36:49.460502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.669 [2024-11-18 20:36:49.460519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:37.669 [2024-11-18 20:36:49.465524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.669 [2024-11-18 20:36:49.465559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.669 [2024-11-18 20:36:49.465577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:37.669 [2024-11-18 20:36:49.470668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.669 [2024-11-18 20:36:49.470703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.669 [2024-11-18 20:36:49.470720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:37.669 [2024-11-18 20:36:49.475960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.669 [2024-11-18 20:36:49.475990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.669 [2024-11-18 20:36:49.476006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:37.669 [2024-11-18 20:36:49.481314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.669 [2024-11-18 20:36:49.481344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.669 [2024-11-18 20:36:49.481362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:37.669 [2024-11-18 20:36:49.486202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.669 [2024-11-18 20:36:49.486230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.669 [2024-11-18 20:36:49.486263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:37.669 [2024-11-18 20:36:49.491251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.669 [2024-11-18 20:36:49.491281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.669 [2024-11-18 20:36:49.491297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:37.669 [2024-11-18 20:36:49.496227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.669 [2024-11-18 20:36:49.496257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.669 [2024-11-18 20:36:49.496273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:37.669 [2024-11-18 20:36:49.501314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.669 [2024-11-18 20:36:49.501359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.669 [2024-11-18 20:36:49.501376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:37.669 [2024-11-18 20:36:49.506411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.669 [2024-11-18 20:36:49.506455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.669 [2024-11-18 20:36:49.506472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:37.669 [2024-11-18 20:36:49.511668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.669 [2024-11-18 20:36:49.511723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.669 [2024-11-18 20:36:49.511742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:37.669 [2024-11-18 20:36:49.516947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.669 [2024-11-18 20:36:49.516979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.669 [2024-11-18 20:36:49.516996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:37.669 [2024-11-18 20:36:49.522227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.669 [2024-11-18 20:36:49.522258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.669 [2024-11-18 20:36:49.522276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:37.669 [2024-11-18 20:36:49.527628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.669 [2024-11-18 20:36:49.527667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.669 [2024-11-18 20:36:49.527685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:37.669 [2024-11-18 20:36:49.533874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.669 [2024-11-18 20:36:49.533906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.669 [2024-11-18 20:36:49.533938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:37.669 [2024-11-18 20:36:49.538416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.669 [2024-11-18 20:36:49.538446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.669 [2024-11-18 20:36:49.538463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:37.669 [2024-11-18 20:36:49.543417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.669 [2024-11-18 20:36:49.543448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.669 [2024-11-18 20:36:49.543466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:37.669 [2024-11-18 20:36:49.551042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.669 [2024-11-18 20:36:49.551074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.669 [2024-11-18 20:36:49.551092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:37.669 [2024-11-18 20:36:49.557520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.669 [2024-11-18 20:36:49.557551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.669 [2024-11-18 20:36:49.557575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:37.669 [2024-11-18 20:36:49.563352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.669 [2024-11-18 20:36:49.563399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.669 [2024-11-18 20:36:49.563416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:37.670 [2024-11-18 20:36:49.569180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.670 [2024-11-18 20:36:49.569211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.670 [2024-11-18 20:36:49.569245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:37.670 [2024-11-18 20:36:49.574710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.670 [2024-11-18 20:36:49.574740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.670 [2024-11-18 20:36:49.574772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:37.670 [2024-11-18 20:36:49.580464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.670 [2024-11-18 20:36:49.580509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.670 [2024-11-18 20:36:49.580527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:37.670 [2024-11-18 20:36:49.587963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.670 [2024-11-18 20:36:49.587996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.670 [2024-11-18 20:36:49.588014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:37.670 [2024-11-18 20:36:49.594654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.670 [2024-11-18 20:36:49.594691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.670 [2024-11-18 20:36:49.594724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:37.670 [2024-11-18 20:36:49.601693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.670 [2024-11-18 20:36:49.601724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.670 [2024-11-18 20:36:49.601742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:37.670 [2024-11-18 20:36:49.607175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.670 [2024-11-18 20:36:49.607208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.670 [2024-11-18 20:36:49.607226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:37.670 [2024-11-18 20:36:49.612558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.670 [2024-11-18 20:36:49.612610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.670 [2024-11-18 20:36:49.612628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:37.670 [2024-11-18 20:36:49.616185]
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.670 [2024-11-18 20:36:49.616215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.670 [2024-11-18 20:36:49.616248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:37.670 [2024-11-18 20:36:49.620529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.670 [2024-11-18 20:36:49.620572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.670 [2024-11-18 20:36:49.620589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:37.670 [2024-11-18 20:36:49.625908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.670 [2024-11-18 20:36:49.625938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.670 [2024-11-18 20:36:49.625970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:37.670 [2024-11-18 20:36:49.632000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.670 [2024-11-18 20:36:49.632033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.670 [2024-11-18 20:36:49.632050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004f p:0 
m:0 dnr:0 00:35:37.956 [2024-11-18 20:36:49.638148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.956 [2024-11-18 20:36:49.638178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.956 [2024-11-18 20:36:49.638195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:37.956 [2024-11-18 20:36:49.644515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.956 [2024-11-18 20:36:49.644546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.956 [2024-11-18 20:36:49.644563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:37.956 [2024-11-18 20:36:49.650386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.956 [2024-11-18 20:36:49.650416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.956 [2024-11-18 20:36:49.650448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:37.956 [2024-11-18 20:36:49.657011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.956 [2024-11-18 20:36:49.657056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.957 [2024-11-18 20:36:49.657072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:37.957 [2024-11-18 20:36:49.663712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.957 [2024-11-18 20:36:49.663742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.957 [2024-11-18 20:36:49.663759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:37.957 [2024-11-18 20:36:49.670304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.957 [2024-11-18 20:36:49.670334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.957 [2024-11-18 20:36:49.670366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:37.957 [2024-11-18 20:36:49.676348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.957 [2024-11-18 20:36:49.676378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.957 [2024-11-18 20:36:49.676410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:37.957 [2024-11-18 20:36:49.682897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.957 [2024-11-18 20:36:49.682946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.957 [2024-11-18 20:36:49.682963] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:37.957 [2024-11-18 20:36:49.688714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.957 [2024-11-18 20:36:49.688744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.957 [2024-11-18 20:36:49.688762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:37.957 [2024-11-18 20:36:49.694343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.957 [2024-11-18 20:36:49.694372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.957 [2024-11-18 20:36:49.694405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:37.957 [2024-11-18 20:36:49.700078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.957 [2024-11-18 20:36:49.700109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.957 [2024-11-18 20:36:49.700126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:37.957 [2024-11-18 20:36:49.705428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.957 [2024-11-18 20:36:49.705458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:37.957 [2024-11-18 20:36:49.705476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:37.957 [2024-11-18 20:36:49.711239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.957 [2024-11-18 20:36:49.711270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.957 [2024-11-18 20:36:49.711294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:37.957 [2024-11-18 20:36:49.716681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.957 [2024-11-18 20:36:49.716714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.957 [2024-11-18 20:36:49.716747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:37.957 [2024-11-18 20:36:49.722291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.957 [2024-11-18 20:36:49.722322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.957 [2024-11-18 20:36:49.722355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:37.957 [2024-11-18 20:36:49.728205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.957 [2024-11-18 20:36:49.728256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.957 [2024-11-18 20:36:49.728275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:37.957 [2024-11-18 20:36:49.733687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.957 [2024-11-18 20:36:49.733719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.957 [2024-11-18 20:36:49.733736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:37.957 [2024-11-18 20:36:49.739180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.957 [2024-11-18 20:36:49.739211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.957 [2024-11-18 20:36:49.739229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:37.957 [2024-11-18 20:36:49.744769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.957 [2024-11-18 20:36:49.744801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.957 [2024-11-18 20:36:49.744818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:37.957 [2024-11-18 20:36:49.749486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.957 [2024-11-18 20:36:49.749531] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.957 [2024-11-18 20:36:49.749549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:37.957 [2024-11-18 20:36:49.753246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.957 [2024-11-18 20:36:49.753276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.957 [2024-11-18 20:36:49.753295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:37.957 [2024-11-18 20:36:49.758733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.957 [2024-11-18 20:36:49.758764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.957 [2024-11-18 20:36:49.758781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:37.957 [2024-11-18 20:36:49.764087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.957 [2024-11-18 20:36:49.764118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.957 [2024-11-18 20:36:49.764135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:37.957 [2024-11-18 20:36:49.769612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 
00:35:37.957 [2024-11-18 20:36:49.769666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.957 [2024-11-18 20:36:49.769684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:37.957 [2024-11-18 20:36:49.775193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.957 [2024-11-18 20:36:49.775238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.957 [2024-11-18 20:36:49.775254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:37.957 [2024-11-18 20:36:49.780563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.957 [2024-11-18 20:36:49.780608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.957 [2024-11-18 20:36:49.780624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:37.957 [2024-11-18 20:36:49.785964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.957 [2024-11-18 20:36:49.785993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.957 [2024-11-18 20:36:49.786024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:37.957 [2024-11-18 20:36:49.791087] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.957 [2024-11-18 20:36:49.791117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.958 [2024-11-18 20:36:49.791150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:37.958 [2024-11-18 20:36:49.796415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.958 [2024-11-18 20:36:49.796445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.958 [2024-11-18 20:36:49.796464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:37.958 [2024-11-18 20:36:49.801776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.958 [2024-11-18 20:36:49.801808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.958 [2024-11-18 20:36:49.801831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:37.958 [2024-11-18 20:36:49.807053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.958 [2024-11-18 20:36:49.807083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.958 [2024-11-18 20:36:49.807114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002f 
p:0 m:0 dnr:0 00:35:37.958 [2024-11-18 20:36:49.812252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.958 [2024-11-18 20:36:49.812283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.958 [2024-11-18 20:36:49.812317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:37.958 [2024-11-18 20:36:49.818246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.958 [2024-11-18 20:36:49.818278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.958 [2024-11-18 20:36:49.818312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:37.958 [2024-11-18 20:36:49.824192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.958 [2024-11-18 20:36:49.824223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.958 [2024-11-18 20:36:49.824241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:37.958 [2024-11-18 20:36:49.829661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.958 [2024-11-18 20:36:49.829693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.958 [2024-11-18 20:36:49.829710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:37.958 [2024-11-18 20:36:49.834995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.958 [2024-11-18 20:36:49.835041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.958 [2024-11-18 20:36:49.835058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:37.958 [2024-11-18 20:36:49.840412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.958 [2024-11-18 20:36:49.840443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.958 [2024-11-18 20:36:49.840460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:37.958 [2024-11-18 20:36:49.845611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.958 [2024-11-18 20:36:49.845674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.958 [2024-11-18 20:36:49.845693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:37.958 [2024-11-18 20:36:49.850810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.958 [2024-11-18 20:36:49.850847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.958 [2024-11-18 20:36:49.850865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:37.958 [2024-11-18 20:36:49.856138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.958 [2024-11-18 20:36:49.856168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.958 [2024-11-18 20:36:49.856186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:37.958 [2024-11-18 20:36:49.861314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.958 [2024-11-18 20:36:49.861349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.958 [2024-11-18 20:36:49.861367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:37.958 [2024-11-18 20:36:49.866655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.958 [2024-11-18 20:36:49.866700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.958 [2024-11-18 20:36:49.866717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:37.958 [2024-11-18 20:36:49.872120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.958 [2024-11-18 20:36:49.872150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:37.958 [2024-11-18 20:36:49.872183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:37.958 [2024-11-18 20:36:49.877634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.958 [2024-11-18 20:36:49.877689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.958 [2024-11-18 20:36:49.877707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:37.958 [2024-11-18 20:36:49.883351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.958 [2024-11-18 20:36:49.883383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.958 [2024-11-18 20:36:49.883400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:37.958 [2024-11-18 20:36:49.888273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.958 [2024-11-18 20:36:49.888303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.958 [2024-11-18 20:36:49.888336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:37.958 [2024-11-18 20:36:49.891489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:37.958 [2024-11-18 20:36:49.891518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.958 [2024-11-18 20:36:49.891550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:37.958 [2024-11-18 20:36:49.896818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.958 [2024-11-18 20:36:49.896848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.958 [2024-11-18 20:36:49.896865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:37.958 [2024-11-18 20:36:49.902263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.958 [2024-11-18 20:36:49.902291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.958 [2024-11-18 20:36:49.902322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:37.958 [2024-11-18 20:36:49.908250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.958 [2024-11-18 20:36:49.908280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.958 [2024-11-18 20:36:49.908312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:37.958 [2024-11-18 20:36:49.913885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.958 [2024-11-18 20:36:49.913915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.958 [2024-11-18 20:36:49.913947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:37.958 [2024-11-18 20:36:49.919312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.959 [2024-11-18 20:36:49.919357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.959 [2024-11-18 20:36:49.919373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:37.959 [2024-11-18 20:36:49.924905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.959 [2024-11-18 20:36:49.924935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.959 [2024-11-18 20:36:49.924956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:37.959 [2024-11-18 20:36:49.930056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.959 [2024-11-18 20:36:49.930087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.959 [2024-11-18 20:36:49.930104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:37.959 [2024-11-18 20:36:49.935199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.959 [2024-11-18 20:36:49.935229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.959 [2024-11-18 20:36:49.935262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:37.959 [2024-11-18 20:36:49.938883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.959 [2024-11-18 20:36:49.938914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.959 [2024-11-18 20:36:49.938941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:37.959 [2024-11-18 20:36:49.943159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.959 [2024-11-18 20:36:49.943190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.959 [2024-11-18 20:36:49.943222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:37.959 [2024-11-18 20:36:49.948297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.959 [2024-11-18 20:36:49.948327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.959 [2024-11-18 20:36:49.948360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:37.959 [2024-11-18 20:36:49.953951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.959 [2024-11-18 20:36:49.953994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.959 [2024-11-18 20:36:49.954011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:37.959 [2024-11-18 20:36:49.959751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:37.959 [2024-11-18 20:36:49.959781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.959 [2024-11-18 20:36:49.959814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:38.219 [2024-11-18 20:36:49.965787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.219 [2024-11-18 20:36:49.965817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.219 [2024-11-18 20:36:49.965850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:38.219 [2024-11-18 20:36:49.970975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.219 [2024-11-18 20:36:49.971020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.219 [2024-11-18 20:36:49.971038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:38.219 [2024-11-18 20:36:49.975999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.219 [2024-11-18 20:36:49.976029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.219 [2024-11-18 20:36:49.976063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:38.219 [2024-11-18 20:36:49.981577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.219 [2024-11-18 20:36:49.981622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.219 [2024-11-18 20:36:49.981646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:38.219 [2024-11-18 20:36:49.986359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.219 [2024-11-18 20:36:49.986393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.219 [2024-11-18 20:36:49.986426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:38.219 [2024-11-18 20:36:49.991486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.219 [2024-11-18 20:36:49.991515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.219 [2024-11-18 20:36:49.991532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:38.219 [2024-11-18 20:36:49.996532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.219 [2024-11-18 20:36:49.996561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.219 [2024-11-18 20:36:49.996578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:38.219 [2024-11-18 20:36:50.001464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.219 [2024-11-18 20:36:50.001494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.219 [2024-11-18 20:36:50.001511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:38.219 [2024-11-18 20:36:50.006592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.219 [2024-11-18 20:36:50.006622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.219 [2024-11-18 20:36:50.006653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:38.219 [2024-11-18 20:36:50.011778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.219 [2024-11-18 20:36:50.011813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.220 [2024-11-18 20:36:50.011831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:38.220 [2024-11-18 20:36:50.016796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.220 [2024-11-18 20:36:50.016828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.220 [2024-11-18 20:36:50.016845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:38.220 [2024-11-18 20:36:50.021857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.220 [2024-11-18 20:36:50.021888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.220 [2024-11-18 20:36:50.021906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:38.220 [2024-11-18 20:36:50.027100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.220 [2024-11-18 20:36:50.027130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.220 [2024-11-18 20:36:50.027148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:38.220 [2024-11-18 20:36:50.032268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.220 [2024-11-18 20:36:50.032298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.220 [2024-11-18 20:36:50.032316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:38.220 [2024-11-18 20:36:50.037580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.220 [2024-11-18 20:36:50.037611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.220 [2024-11-18 20:36:50.037628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:38.220 [2024-11-18 20:36:50.043142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.220 [2024-11-18 20:36:50.043175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.220 [2024-11-18 20:36:50.043192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:38.220 [2024-11-18 20:36:50.048906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.220 [2024-11-18 20:36:50.048945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.220 [2024-11-18 20:36:50.048962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:38.220 [2024-11-18 20:36:50.054741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.220 [2024-11-18 20:36:50.054773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.220 [2024-11-18 20:36:50.054791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:38.220 [2024-11-18 20:36:50.060724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.220 [2024-11-18 20:36:50.060755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.220 [2024-11-18 20:36:50.060772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:38.220 [2024-11-18 20:36:50.066732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.220 [2024-11-18 20:36:50.066762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.220 [2024-11-18 20:36:50.066778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:38.220 [2024-11-18 20:36:50.072836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.220 [2024-11-18 20:36:50.072866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.220 [2024-11-18 20:36:50.072884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:38.220 [2024-11-18 20:36:50.078266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.220 [2024-11-18 20:36:50.078298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.220 [2024-11-18 20:36:50.078323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:38.220 [2024-11-18 20:36:50.083700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.220 [2024-11-18 20:36:50.083731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.220 [2024-11-18 20:36:50.083749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:38.220 [2024-11-18 20:36:50.088973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.220 [2024-11-18 20:36:50.089003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.220 [2024-11-18 20:36:50.089035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:38.220 [2024-11-18 20:36:50.094063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.220 [2024-11-18 20:36:50.094094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.220 [2024-11-18 20:36:50.094112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:38.220 [2024-11-18 20:36:50.099742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.220 [2024-11-18 20:36:50.099773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.220 [2024-11-18 20:36:50.099790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:38.220 [2024-11-18 20:36:50.105462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.220 [2024-11-18 20:36:50.105492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.220 [2024-11-18 20:36:50.105525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:38.220 [2024-11-18 20:36:50.109422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.220 [2024-11-18 20:36:50.109450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.220 [2024-11-18 20:36:50.109482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:38.220 [2024-11-18 20:36:50.116133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.220 [2024-11-18 20:36:50.116177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.220 [2024-11-18 20:36:50.116195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:38.220 [2024-11-18 20:36:50.121214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.220 [2024-11-18 20:36:50.121258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.220 [2024-11-18 20:36:50.121276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:38.220 [2024-11-18 20:36:50.126628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.220 [2024-11-18 20:36:50.126686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.220 [2024-11-18 20:36:50.126704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:38.221 [2024-11-18 20:36:50.131109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.221 [2024-11-18 20:36:50.131137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.221 [2024-11-18 20:36:50.131170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:38.221 [2024-11-18 20:36:50.136131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.221 [2024-11-18 20:36:50.136161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.221 [2024-11-18 20:36:50.136178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:38.221 [2024-11-18 20:36:50.141324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.221 [2024-11-18 20:36:50.141364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.221 [2024-11-18 20:36:50.141397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:38.221 [2024-11-18 20:36:50.146397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.221 [2024-11-18 20:36:50.146441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.221 [2024-11-18 20:36:50.146458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:38.221 [2024-11-18 20:36:50.151601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.221 [2024-11-18 20:36:50.151629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.221 [2024-11-18 20:36:50.151670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:38.221 [2024-11-18 20:36:50.156883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.221 [2024-11-18 20:36:50.156911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.221 [2024-11-18 20:36:50.156943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:38.221 [2024-11-18 20:36:50.162122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.221 [2024-11-18 20:36:50.162151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.221 [2024-11-18 20:36:50.162167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:38.221 [2024-11-18 20:36:50.167209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.221 [2024-11-18 20:36:50.167237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.221 [2024-11-18 20:36:50.167269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:38.221 [2024-11-18 20:36:50.170312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.221 [2024-11-18 20:36:50.170340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.221 [2024-11-18 20:36:50.170357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:38.221 [2024-11-18 20:36:50.175429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.221 [2024-11-18 20:36:50.175459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.221 [2024-11-18 20:36:50.175493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:38.221 [2024-11-18 20:36:50.181624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.221 [2024-11-18 20:36:50.181682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.221 [2024-11-18 20:36:50.181701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:38.221 [2024-11-18 20:36:50.187512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.221 [2024-11-18 20:36:50.187544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.221 [2024-11-18 20:36:50.187562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:38.221 [2024-11-18 20:36:50.193306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.221 [2024-11-18 20:36:50.193337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.221 [2024-11-18 20:36:50.193355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:38.221 [2024-11-18 20:36:50.199489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.221 [2024-11-18 20:36:50.199535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.221 [2024-11-18 20:36:50.199552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:38.221 [2024-11-18 20:36:50.205291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.221 [2024-11-18 20:36:50.205321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.221 [2024-11-18 20:36:50.205354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:38.221 [2024-11-18 20:36:50.210626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.221 [2024-11-18 20:36:50.210663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.221 [2024-11-18 20:36:50.210681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:38.221 [2024-11-18 20:36:50.216330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.221 [2024-11-18 20:36:50.216362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.221 [2024-11-18 20:36:50.216392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:38.221 [2024-11-18 20:36:50.221839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.221 [2024-11-18 20:36:50.221871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.221 [2024-11-18 20:36:50.221888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:38.481 [2024-11-18 20:36:50.227338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.481 [2024-11-18 20:36:50.227369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.481 [2024-11-18 20:36:50.227387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:38.481 [2024-11-18 20:36:50.232980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.481 [2024-11-18 20:36:50.233025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.481 [2024-11-18 20:36:50.233042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:38.481 [2024-11-18 20:36:50.238563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.481 [2024-11-18 20:36:50.238593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.481 [2024-11-18 20:36:50.238609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:38.481 [2024-11-18 20:36:50.243724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.481 [2024-11-18 20:36:50.243753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.481 [2024-11-18 20:36:50.243785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:38.481 [2024-11-18 20:36:50.248831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.481 [2024-11-18 20:36:50.248860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.481 [2024-11-18 20:36:50.248877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:38.481 [2024-11-18 20:36:50.254104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.481 [2024-11-18 20:36:50.254133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.481 [2024-11-18 20:36:50.254165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:38.481 [2024-11-18 20:36:50.259205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.481 [2024-11-18 20:36:50.259233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.481 [2024-11-18 20:36:50.259265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:38.481 [2024-11-18 20:36:50.264334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.481 [2024-11-18 20:36:50.264363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.481 [2024-11-18 20:36:50.264397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:38.481 [2024-11-18 20:36:50.269533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.481 [2024-11-18 20:36:50.269563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.481 [2024-11-18 20:36:50.269580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:38.481 [2024-11-18 20:36:50.274652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.481 [2024-11-18 20:36:50.274682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.481 [2024-11-18 20:36:50.274699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:38.481 [2024-11-18 20:36:50.279932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.481 [2024-11-18 20:36:50.279976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.481 [2024-11-18 20:36:50.279993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:38.481 [2024-11-18 20:36:50.285228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.481 [2024-11-18 20:36:50.285272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.481 [2024-11-18 20:36:50.285288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:38.481 [2024-11-18 20:36:50.290386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.481 [2024-11-18 20:36:50.290416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.481 [2024-11-18 20:36:50.290448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:38.481 [2024-11-18 20:36:50.295688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.481 [2024-11-18 20:36:50.295718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.481 [2024-11-18 20:36:50.295735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:38.481 [2024-11-18 20:36:50.300931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.481 [2024-11-18 20:36:50.300961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.481 [2024-11-18 20:36:50.300978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:38.481 [2024-11-18 20:36:50.306283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.481 [2024-11-18 20:36:50.306328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.481 [2024-11-18 20:36:50.306352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:38.482 [2024-11-18 20:36:50.311712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.482 [2024-11-18 20:36:50.311743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.482 [2024-11-18 20:36:50.311760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:38.482 [2024-11-18 20:36:50.317143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.482 [2024-11-18 20:36:50.317173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.482 [2024-11-18 20:36:50.317205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:38.482 [2024-11-18 20:36:50.322559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920)
00:35:38.482 [2024-11-18 20:36:50.322590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.482 [2024-11-18 20:36:50.322607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT
TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:38.482 [2024-11-18 20:36:50.328687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:38.482 [2024-11-18 20:36:50.328717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.482 [2024-11-18 20:36:50.328751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:38.482 [2024-11-18 20:36:50.334384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:38.482 [2024-11-18 20:36:50.334415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.482 [2024-11-18 20:36:50.334448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:38.482 [2024-11-18 20:36:50.340135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:38.482 [2024-11-18 20:36:50.340165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.482 [2024-11-18 20:36:50.340197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:38.482 [2024-11-18 20:36:50.346284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:38.482 [2024-11-18 20:36:50.346313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.482 [2024-11-18 20:36:50.346346] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:38.482 [2024-11-18 20:36:50.352048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:38.482 [2024-11-18 20:36:50.352094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.482 [2024-11-18 20:36:50.352111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:38.482 [2024-11-18 20:36:50.358136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:38.482 [2024-11-18 20:36:50.358185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.482 [2024-11-18 20:36:50.358202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:38.482 [2024-11-18 20:36:50.365344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:38.482 [2024-11-18 20:36:50.365375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.482 [2024-11-18 20:36:50.365393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:38.482 [2024-11-18 20:36:50.372998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdbb920) 00:35:38.482 [2024-11-18 20:36:50.373028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:38.482 [2024-11-18 20:36:50.373061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:38.482 5500.00 IOPS, 687.50 MiB/s
00:35:38.482 Latency(us)
00:35:38.482 [2024-11-18T19:36:50.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:38.482 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:35:38.482 nvme0n1 : 2.04 5393.03 674.13 0.00 0.00 2907.58 673.56 44661.57
00:35:38.482 [2024-11-18T19:36:50.490Z] ===================================================================================================================
00:35:38.482 [2024-11-18T19:36:50.490Z] Total : 5393.03 674.13 0.00 0.00 2907.58 673.56 44661.57
00:35:38.482 {
00:35:38.482 "results": [
00:35:38.482 {
00:35:38.482 "job": "nvme0n1",
00:35:38.482 "core_mask": "0x2",
00:35:38.482 "workload": "randread",
00:35:38.482 "status": "finished",
00:35:38.482 "queue_depth": 16,
00:35:38.482 "io_size": 131072,
00:35:38.482 "runtime": 2.042638,
00:35:38.482 "iops": 5393.026077063092,
00:35:38.482 "mibps": 674.1282596328865,
00:35:38.482 "io_failed": 0,
00:35:38.482 "io_timeout": 0,
00:35:38.482 "avg_latency_us": 2907.579818714866,
00:35:38.482 "min_latency_us": 673.5644444444445,
00:35:38.482 "max_latency_us": 44661.57037037037
00:35:38.482 }
00:35:38.482 ],
00:35:38.482 "core_count": 1
00:35:38.482 }
00:35:38.482 20:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:38.482 20:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:38.482 20:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:38.482 20:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:38.482 | .driver_specific
00:35:38.482 | .nvme_error
00:35:38.482 | .status_code
00:35:38.482 | .command_transient_transport_error'
00:35:38.740 20:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 355 > 0 ))
00:35:38.740 20:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 397065
00:35:38.740 20:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 397065 ']'
00:35:38.740 20:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 397065
00:35:38.740 20:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:35:38.740 20:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:38.740 20:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 397065
00:35:38.999 20:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:35:38.999 20:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:35:38.999 20:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 397065'
killing process with pid 397065
00:35:38.999 20:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 397065
Received shutdown signal, test time was about 2.000000 seconds
00:35:38.999
00:35:38.999 Latency(us)
00:35:38.999 [2024-11-18T19:36:51.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:38.999 [2024-11-18T19:36:51.007Z] ===================================================================================================================
[2024-11-18T19:36:51.007Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:38.999 20:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 397065
00:35:38.999 20:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:35:38.999 20:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:35:38.999 20:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:35:38.999 20:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:35:38.999 20:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:35:38.999 20:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=397494
00:35:38.999 20:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 397494 /var/tmp/bperf.sock
00:35:38.999 20:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:35:38.999 20:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 397494 ']'
00:35:38.999 20:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:38.999 20:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:38.999 20:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:35:38.999 20:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:38.999 20:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:38.999 [2024-11-18 20:36:50.982151] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization...
00:35:38.999 [2024-11-18 20:36:50.982233] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid397494 ]
00:35:39.258 [2024-11-18 20:36:51.053445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:39.258 [2024-11-18 20:36:51.099871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:35:39.258 20:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:39.258 20:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:35:39.258 20:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:39.258 20:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:39.515 20:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:39.515 20:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:39.515 20:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:39.515 20:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:39.515 20:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:39.516 20:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:40.082 nvme0n1
00:35:40.082 20:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:35:40.082 20:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:40.082 20:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:40.082 20:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:40.082 20:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:35:40.082 20:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:40.082 Running I/O for 2 seconds...
00:35:40.082 [2024-11-18 20:36:51.950748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:40.082 [2024-11-18 20:36:51.951948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.082 [2024-11-18 20:36:51.952002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:40.082 [2024-11-18 20:36:51.962734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df550 00:35:40.082 [2024-11-18 20:36:51.964005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.082 [2024-11-18 20:36:51.964034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:40.082 [2024-11-18 20:36:51.975361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166ff3c8 00:35:40.082 [2024-11-18 20:36:51.976787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.082 [2024-11-18 20:36:51.976835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:40.082 [2024-11-18 20:36:51.987900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166ebfd0 00:35:40.082 [2024-11-18 20:36:51.989488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.082 [2024-11-18 20:36:51.989533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:40.082 [2024-11-18 20:36:51.998700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166fe2e8 00:35:40.082 [2024-11-18 20:36:52.000526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.082 [2024-11-18 20:36:52.000557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:40.082 [2024-11-18 20:36:52.008951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df550 00:35:40.082 [2024-11-18 20:36:52.009807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.082 [2024-11-18 20:36:52.009853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:40.082 [2024-11-18 20:36:52.021398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166ea248 00:35:40.082 [2024-11-18 20:36:52.022398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.082 [2024-11-18 20:36:52.022447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:40.082 [2024-11-18 20:36:52.033476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166f4298 00:35:40.082 [2024-11-18 20:36:52.034609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.082 [2024-11-18 20:36:52.034648] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:40.082 [2024-11-18 20:36:52.045225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166e4de8 00:35:40.082 [2024-11-18 20:36:52.046105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.082 [2024-11-18 20:36:52.046162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:40.083 [2024-11-18 20:36:52.060318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166fdeb0 00:35:40.083 [2024-11-18 20:36:52.062079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.083 [2024-11-18 20:36:52.062139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:40.083 [2024-11-18 20:36:52.071588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df550 00:35:40.083 [2024-11-18 20:36:52.073340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.083 [2024-11-18 20:36:52.073389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:40.083 [2024-11-18 20:36:52.083091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166f0350 00:35:40.083 [2024-11-18 20:36:52.084650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.083 [2024-11-18 20:36:52.084695] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:40.341 [2024-11-18 20:36:52.095480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166eb328 00:35:40.341 [2024-11-18 20:36:52.097104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.341 [2024-11-18 20:36:52.097162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:40.341 [2024-11-18 20:36:52.105070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166e6b70 00:35:40.341 [2024-11-18 20:36:52.106117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.341 [2024-11-18 20:36:52.106167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:40.342 [2024-11-18 20:36:52.119809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166efae0 00:35:40.342 [2024-11-18 20:36:52.121444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.342 [2024-11-18 20:36:52.121494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:40.342 [2024-11-18 20:36:52.129303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166efae0 00:35:40.342 [2024-11-18 20:36:52.130290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13863 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:35:40.342 [2024-11-18 20:36:52.130340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:40.342 [2024-11-18 20:36:52.141514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166e38d0 00:35:40.342 [2024-11-18 20:36:52.142513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.342 [2024-11-18 20:36:52.142563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:40.342 [2024-11-18 20:36:52.152960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:40.342 [2024-11-18 20:36:52.153866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.342 [2024-11-18 20:36:52.153904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:40.342 [2024-11-18 20:36:52.167073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166ea248 00:35:40.342 [2024-11-18 20:36:52.168175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.342 [2024-11-18 20:36:52.168222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:40.342 [2024-11-18 20:36:52.178430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166e88f8 00:35:40.342 [2024-11-18 20:36:52.179817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 
nsid:1 lba:17076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.342 [2024-11-18 20:36:52.179849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:40.342 [2024-11-18 20:36:52.190303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166eee38 00:35:40.342 [2024-11-18 20:36:52.191621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.342 [2024-11-18 20:36:52.191679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:40.342 [2024-11-18 20:36:52.202179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166e6300 00:35:40.342 [2024-11-18 20:36:52.203121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.342 [2024-11-18 20:36:52.203170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:40.342 [2024-11-18 20:36:52.213397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166eff18 00:35:40.342 [2024-11-18 20:36:52.215121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.342 [2024-11-18 20:36:52.215157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:40.342 [2024-11-18 20:36:52.225826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166ddc00 00:35:40.342 [2024-11-18 20:36:52.226827] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.342 [2024-11-18 20:36:52.226861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:40.342 [2024-11-18 20:36:52.236845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166fe2e8 00:35:40.342 [2024-11-18 20:36:52.238498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.342 [2024-11-18 20:36:52.238535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:40.342 [2024-11-18 20:36:52.248700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166eee38 00:35:40.342 [2024-11-18 20:36:52.250073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.342 [2024-11-18 20:36:52.250103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:40.342 [2024-11-18 20:36:52.260567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:40.342 [2024-11-18 20:36:52.261582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.342 [2024-11-18 20:36:52.261632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:40.342 [2024-11-18 20:36:52.272236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166eb328 00:35:40.342 
[2024-11-18 20:36:52.273378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.342 [2024-11-18 20:36:52.273422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:40.342 [2024-11-18 20:36:52.284052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166ef6a8 00:35:40.342 [2024-11-18 20:36:52.284815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.342 [2024-11-18 20:36:52.284846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:40.342 [2024-11-18 20:36:52.296414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166de470 00:35:40.342 [2024-11-18 20:36:52.297368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.342 [2024-11-18 20:36:52.297413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:40.342 [2024-11-18 20:36:52.307574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166f9b30 00:35:40.342 [2024-11-18 20:36:52.309274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.342 [2024-11-18 20:36:52.309306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:40.342 [2024-11-18 20:36:52.317833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c35460) with pdu=0x2000166e6738 00:35:40.342 [2024-11-18 20:36:52.318544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.342 [2024-11-18 20:36:52.318571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:40.342 [2024-11-18 20:36:52.330291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166dfdc0 00:35:40.342 [2024-11-18 20:36:52.331159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.342 [2024-11-18 20:36:52.331187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:40.342 [2024-11-18 20:36:52.342782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166ef6a8 00:35:40.342 [2024-11-18 20:36:52.343787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.342 [2024-11-18 20:36:52.343830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:40.601 [2024-11-18 20:36:52.355482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166f7538 00:35:40.601 [2024-11-18 20:36:52.356630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.601 [2024-11-18 20:36:52.356682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:40.601 [2024-11-18 20:36:52.369825] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166eea00 00:35:40.601 [2024-11-18 20:36:52.371548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.601 [2024-11-18 20:36:52.371595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:40.601 [2024-11-18 20:36:52.378181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166e0a68 00:35:40.601 [2024-11-18 20:36:52.378937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.601 [2024-11-18 20:36:52.378988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:40.601 [2024-11-18 20:36:52.390443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166e6300 00:35:40.601 [2024-11-18 20:36:52.391249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.601 [2024-11-18 20:36:52.391298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:40.601 [2024-11-18 20:36:52.404013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166ef270 00:35:40.601 [2024-11-18 20:36:52.405348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.601 [2024-11-18 20:36:52.405397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0028 p:0 m:0 
dnr:0 00:35:40.601 [2024-11-18 20:36:52.415158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166f7970 00:35:40.601 [2024-11-18 20:36:52.416242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.601 [2024-11-18 20:36:52.416275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:40.601 [2024-11-18 20:36:52.426748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166f2510 00:35:40.601 [2024-11-18 20:36:52.427667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.601 [2024-11-18 20:36:52.427712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:40.601 [2024-11-18 20:36:52.437906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166dfdc0 00:35:40.601 [2024-11-18 20:36:52.438648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.601 [2024-11-18 20:36:52.438699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:40.601 [2024-11-18 20:36:52.450153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166fc560 00:35:40.601 [2024-11-18 20:36:52.451121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.601 [2024-11-18 20:36:52.451173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:51 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:40.601 [2024-11-18 20:36:52.461977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166fc128 00:35:40.601 [2024-11-18 20:36:52.463077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.601 [2024-11-18 20:36:52.463128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:40.601 [2024-11-18 20:36:52.473978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166f1868 00:35:40.601 [2024-11-18 20:36:52.474682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.601 [2024-11-18 20:36:52.474729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:40.601 [2024-11-18 20:36:52.488068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166ec408 00:35:40.601 [2024-11-18 20:36:52.489708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.601 [2024-11-18 20:36:52.489742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:40.602 [2024-11-18 20:36:52.500350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166fe720 00:35:40.602 [2024-11-18 20:36:52.502198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.602 [2024-11-18 20:36:52.502248] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:40.602 [2024-11-18 20:36:52.508770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166e1710 00:35:40.602 [2024-11-18 20:36:52.509733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.602 [2024-11-18 20:36:52.509765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:40.602 [2024-11-18 20:36:52.521303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166f2510 00:35:40.602 [2024-11-18 20:36:52.522386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.602 [2024-11-18 20:36:52.522433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:40.602 [2024-11-18 20:36:52.533345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166e8d30 00:35:40.602 [2024-11-18 20:36:52.534573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.602 [2024-11-18 20:36:52.534605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:40.602 [2024-11-18 20:36:52.545186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166fe2e8 00:35:40.602 [2024-11-18 20:36:52.546339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.602 [2024-11-18 20:36:52.546387] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:40.602 [2024-11-18 20:36:52.557187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166efae0 00:35:40.602 [2024-11-18 20:36:52.558367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.602 [2024-11-18 20:36:52.558416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:40.602 [2024-11-18 20:36:52.571259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166fda78 00:35:40.602 [2024-11-18 20:36:52.572951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.602 [2024-11-18 20:36:52.573000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:40.602 [2024-11-18 20:36:52.579653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166f57b0 00:35:40.602 [2024-11-18 20:36:52.580524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.602 [2024-11-18 20:36:52.580569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:40.602 [2024-11-18 20:36:52.594092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166eb328 00:35:40.602 [2024-11-18 20:36:52.595460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10026 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:35:40.602 [2024-11-18 20:36:52.595509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:40.602 [2024-11-18 20:36:52.605335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166ee190 00:35:40.602 [2024-11-18 20:36:52.606633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.602 [2024-11-18 20:36:52.606690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:40.860 [2024-11-18 20:36:52.616760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166eea00 00:35:40.861 [2024-11-18 20:36:52.617851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.861 [2024-11-18 20:36:52.617897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:40.861 [2024-11-18 20:36:52.630303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166ff3c8 00:35:40.861 [2024-11-18 20:36:52.631964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.861 [2024-11-18 20:36:52.632013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:40.861 [2024-11-18 20:36:52.639594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166ecc78 00:35:40.861 [2024-11-18 20:36:52.640748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 
nsid:1 lba:18234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.861 [2024-11-18 20:36:52.640799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:40.861 [2024-11-18 20:36:52.653493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:40.861 [2024-11-18 20:36:52.653772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.861 [2024-11-18 20:36:52.653803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:40.861 [2024-11-18 20:36:52.667413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:40.861 [2024-11-18 20:36:52.667684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.861 [2024-11-18 20:36:52.667714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:40.861 [2024-11-18 20:36:52.681396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:40.861 [2024-11-18 20:36:52.681679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.861 [2024-11-18 20:36:52.681711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:40.861 [2024-11-18 20:36:52.695324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:40.861 [2024-11-18 20:36:52.695576] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.861 [2024-11-18 20:36:52.695606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:40.861 [2024-11-18 20:36:52.709207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:40.861 [2024-11-18 20:36:52.709462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.861 [2024-11-18 20:36:52.709493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:40.861 [2024-11-18 20:36:52.723046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:40.861 [2024-11-18 20:36:52.723319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.861 [2024-11-18 20:36:52.723361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:40.861 [2024-11-18 20:36:52.737196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:40.861 [2024-11-18 20:36:52.737453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.861 [2024-11-18 20:36:52.737486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:40.861 [2024-11-18 20:36:52.751178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 
00:35:40.861 [2024-11-18 20:36:52.751423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.861 [2024-11-18 20:36:52.751470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:40.861 [2024-11-18 20:36:52.765237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:40.861 [2024-11-18 20:36:52.765495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.861 [2024-11-18 20:36:52.765528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:40.861 [2024-11-18 20:36:52.779199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:40.861 [2024-11-18 20:36:52.779439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.861 [2024-11-18 20:36:52.779467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:40.861 [2024-11-18 20:36:52.793201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:40.861 [2024-11-18 20:36:52.793471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.861 [2024-11-18 20:36:52.793514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:40.861 [2024-11-18 20:36:52.807062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:40.861 [2024-11-18 20:36:52.807330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.861 [2024-11-18 20:36:52.807372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:40.861 [2024-11-18 20:36:52.820957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:40.861 [2024-11-18 20:36:52.821202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.861 [2024-11-18 20:36:52.821250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:40.861 [2024-11-18 20:36:52.834810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:40.861 [2024-11-18 20:36:52.835051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.861 [2024-11-18 20:36:52.835084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:40.861 [2024-11-18 20:36:52.848808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:40.861 [2024-11-18 20:36:52.849052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.861 [2024-11-18 20:36:52.849084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:40.861 [2024-11-18 20:36:52.862760] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:40.861 [2024-11-18 20:36:52.863003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.861 [2024-11-18 20:36:52.863034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.120 [2024-11-18 20:36:52.877138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.120 [2024-11-18 20:36:52.877395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.120 [2024-11-18 20:36:52.877425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.120 [2024-11-18 20:36:52.891146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.120 [2024-11-18 20:36:52.891395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.120 [2024-11-18 20:36:52.891427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.120 [2024-11-18 20:36:52.905101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.120 [2024-11-18 20:36:52.905333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.120 [2024-11-18 20:36:52.905365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 
dnr:0 00:35:41.120 [2024-11-18 20:36:52.919225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.120 [2024-11-18 20:36:52.919494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.120 [2024-11-18 20:36:52.919525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.120 [2024-11-18 20:36:52.933327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.120 [2024-11-18 20:36:52.933582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.120 [2024-11-18 20:36:52.933609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.120 20377.00 IOPS, 79.60 MiB/s [2024-11-18T19:36:53.128Z] [2024-11-18 20:36:52.947284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.120 [2024-11-18 20:36:52.947548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.120 [2024-11-18 20:36:52.947579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.120 [2024-11-18 20:36:52.961132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.120 [2024-11-18 20:36:52.961402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.120 [2024-11-18 20:36:52.961431] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.120 [2024-11-18 20:36:52.974968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.120 [2024-11-18 20:36:52.975207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.120 [2024-11-18 20:36:52.975239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.120 [2024-11-18 20:36:52.988969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.120 [2024-11-18 20:36:52.989250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.120 [2024-11-18 20:36:52.989281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.120 [2024-11-18 20:36:53.003116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.120 [2024-11-18 20:36:53.003375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.120 [2024-11-18 20:36:53.003408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.120 [2024-11-18 20:36:53.017082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.120 [2024-11-18 20:36:53.017352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:41.120 [2024-11-18 20:36:53.017385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.120 [2024-11-18 20:36:53.031026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.120 [2024-11-18 20:36:53.031295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.121 [2024-11-18 20:36:53.031326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.121 [2024-11-18 20:36:53.045158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.121 [2024-11-18 20:36:53.045414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.121 [2024-11-18 20:36:53.045446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.121 [2024-11-18 20:36:53.059203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.121 [2024-11-18 20:36:53.059457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.121 [2024-11-18 20:36:53.059489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.121 [2024-11-18 20:36:53.073339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.121 [2024-11-18 20:36:53.073595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 
lba:2047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.121 [2024-11-18 20:36:53.073623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.121 [2024-11-18 20:36:53.087429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.121 [2024-11-18 20:36:53.087711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.121 [2024-11-18 20:36:53.087761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.121 [2024-11-18 20:36:53.101376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.121 [2024-11-18 20:36:53.101655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.121 [2024-11-18 20:36:53.101698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.121 [2024-11-18 20:36:53.115421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.121 [2024-11-18 20:36:53.115696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.121 [2024-11-18 20:36:53.115743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.379 [2024-11-18 20:36:53.129669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.379 [2024-11-18 20:36:53.129931] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.379 [2024-11-18 20:36:53.129960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.379 [2024-11-18 20:36:53.143813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.379 [2024-11-18 20:36:53.144060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.379 [2024-11-18 20:36:53.144090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.379 [2024-11-18 20:36:53.157826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.379 [2024-11-18 20:36:53.158056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.379 [2024-11-18 20:36:53.158087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.379 [2024-11-18 20:36:53.171770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.379 [2024-11-18 20:36:53.172016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.379 [2024-11-18 20:36:53.172063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.379 [2024-11-18 20:36:53.185779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 
00:35:41.379 [2024-11-18 20:36:53.186019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.379 [2024-11-18 20:36:53.186063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.379 [2024-11-18 20:36:53.199918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.379 [2024-11-18 20:36:53.200164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.379 [2024-11-18 20:36:53.200200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.379 [2024-11-18 20:36:53.213762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.379 [2024-11-18 20:36:53.214026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.379 [2024-11-18 20:36:53.214061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.379 [2024-11-18 20:36:53.227737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.379 [2024-11-18 20:36:53.227972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.379 [2024-11-18 20:36:53.228026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.380 [2024-11-18 20:36:53.241851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.380 [2024-11-18 20:36:53.242092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.380 [2024-11-18 20:36:53.242145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.380 [2024-11-18 20:36:53.255940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.380 [2024-11-18 20:36:53.256190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.380 [2024-11-18 20:36:53.256221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.380 [2024-11-18 20:36:53.269891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.380 [2024-11-18 20:36:53.270131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.380 [2024-11-18 20:36:53.270169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.380 [2024-11-18 20:36:53.283941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.380 [2024-11-18 20:36:53.284195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.380 [2024-11-18 20:36:53.284237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.380 [2024-11-18 20:36:53.297876] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.380 [2024-11-18 20:36:53.298118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.380 [2024-11-18 20:36:53.298149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.380 [2024-11-18 20:36:53.311655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.380 [2024-11-18 20:36:53.311897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.380 [2024-11-18 20:36:53.311941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.380 [2024-11-18 20:36:53.325615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.380 [2024-11-18 20:36:53.325870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.380 [2024-11-18 20:36:53.325901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.380 [2024-11-18 20:36:53.339463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.380 [2024-11-18 20:36:53.339742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.380 [2024-11-18 20:36:53.339771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 
dnr:0 00:35:41.380 [2024-11-18 20:36:53.353493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.380 [2024-11-18 20:36:53.353784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.380 [2024-11-18 20:36:53.353832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.380 [2024-11-18 20:36:53.367379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.380 [2024-11-18 20:36:53.367653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.380 [2024-11-18 20:36:53.367695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.380 [2024-11-18 20:36:53.381376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.380 [2024-11-18 20:36:53.381650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.380 [2024-11-18 20:36:53.381695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.639 [2024-11-18 20:36:53.395776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.639 [2024-11-18 20:36:53.396008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.639 [2024-11-18 20:36:53.396036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.639 [2024-11-18 20:36:53.409721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.639 [2024-11-18 20:36:53.409965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.639 [2024-11-18 20:36:53.409995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.639 [2024-11-18 20:36:53.423677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.639 [2024-11-18 20:36:53.423936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.639 [2024-11-18 20:36:53.423968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.639 [2024-11-18 20:36:53.437549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.639 [2024-11-18 20:36:53.437820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.639 [2024-11-18 20:36:53.437854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.639 [2024-11-18 20:36:53.451557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.639 [2024-11-18 20:36:53.451808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.639 [2024-11-18 20:36:53.451839] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.639 [2024-11-18 20:36:53.465535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.639 [2024-11-18 20:36:53.465761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.639 [2024-11-18 20:36:53.465796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.639 [2024-11-18 20:36:53.479215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.639 [2024-11-18 20:36:53.479474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.639 [2024-11-18 20:36:53.479507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.639 [2024-11-18 20:36:53.493192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.639 [2024-11-18 20:36:53.493438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.639 [2024-11-18 20:36:53.493484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.639 [2024-11-18 20:36:53.507148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.639 [2024-11-18 20:36:53.507406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:41.639 [2024-11-18 20:36:53.507437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.639 [2024-11-18 20:36:53.521080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.639 [2024-11-18 20:36:53.521330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.639 [2024-11-18 20:36:53.521376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.639 [2024-11-18 20:36:53.535003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.639 [2024-11-18 20:36:53.535254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.639 [2024-11-18 20:36:53.535285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.639 [2024-11-18 20:36:53.549030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.639 [2024-11-18 20:36:53.549275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.639 [2024-11-18 20:36:53.549321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.639 [2024-11-18 20:36:53.563113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.639 [2024-11-18 20:36:53.563377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 
lba:22642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.639 [2024-11-18 20:36:53.563407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.639 [2024-11-18 20:36:53.576965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.640 [2024-11-18 20:36:53.577209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.640 [2024-11-18 20:36:53.577256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.640 [2024-11-18 20:36:53.590949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.640 [2024-11-18 20:36:53.591201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.640 [2024-11-18 20:36:53.591251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.640 [2024-11-18 20:36:53.604982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.640 [2024-11-18 20:36:53.605242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.640 [2024-11-18 20:36:53.605273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.640 [2024-11-18 20:36:53.619013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.640 [2024-11-18 20:36:53.619282] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.640 [2024-11-18 20:36:53.619326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.640 [2024-11-18 20:36:53.633048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.640 [2024-11-18 20:36:53.633312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.640 [2024-11-18 20:36:53.633343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.898 [2024-11-18 20:36:53.647487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.898 [2024-11-18 20:36:53.647769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.898 [2024-11-18 20:36:53.647802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.898 [2024-11-18 20:36:53.661544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.898 [2024-11-18 20:36:53.661795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.898 [2024-11-18 20:36:53.661840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.898 [2024-11-18 20:36:53.675514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 
00:35:41.898 [2024-11-18 20:36:53.675784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.898 [2024-11-18 20:36:53.675820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.898 [2024-11-18 20:36:53.689527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.898 [2024-11-18 20:36:53.689781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.898 [2024-11-18 20:36:53.689814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.898 [2024-11-18 20:36:53.703550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.898 [2024-11-18 20:36:53.703791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.898 [2024-11-18 20:36:53.703826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.898 [2024-11-18 20:36:53.717615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.898 [2024-11-18 20:36:53.717876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.898 [2024-11-18 20:36:53.717909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.898 [2024-11-18 20:36:53.731665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.898 [2024-11-18 20:36:53.731896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.899 [2024-11-18 20:36:53.731929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.899 [2024-11-18 20:36:53.745693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.899 [2024-11-18 20:36:53.745940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.899 [2024-11-18 20:36:53.745971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.899 [2024-11-18 20:36:53.759626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.899 [2024-11-18 20:36:53.759893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.899 [2024-11-18 20:36:53.759936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.899 [2024-11-18 20:36:53.773536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.899 [2024-11-18 20:36:53.773794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.899 [2024-11-18 20:36:53.773826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.899 [2024-11-18 20:36:53.787421] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.899 [2024-11-18 20:36:53.787696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.899 [2024-11-18 20:36:53.787744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.899 [2024-11-18 20:36:53.801296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.899 [2024-11-18 20:36:53.801562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.899 [2024-11-18 20:36:53.801594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.899 [2024-11-18 20:36:53.815075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.899 [2024-11-18 20:36:53.815346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.899 [2024-11-18 20:36:53.815389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.899 [2024-11-18 20:36:53.829031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.899 [2024-11-18 20:36:53.829259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.899 [2024-11-18 20:36:53.829291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 
dnr:0 00:35:41.899 [2024-11-18 20:36:53.842878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.899 [2024-11-18 20:36:53.843133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.899 [2024-11-18 20:36:53.843177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.899 [2024-11-18 20:36:53.856938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.899 [2024-11-18 20:36:53.857205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.899 [2024-11-18 20:36:53.857236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.899 [2024-11-18 20:36:53.870912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.899 [2024-11-18 20:36:53.871166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.899 [2024-11-18 20:36:53.871213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.899 [2024-11-18 20:36:53.884890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.899 [2024-11-18 20:36:53.885141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.899 [2024-11-18 20:36:53.885185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:41.899 [2024-11-18 20:36:53.898819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:41.899 [2024-11-18 20:36:53.899047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.899 [2024-11-18 20:36:53.899079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:42.158 [2024-11-18 20:36:53.913117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:42.158 [2024-11-18 20:36:53.913387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.158 [2024-11-18 20:36:53.913416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:42.158 [2024-11-18 20:36:53.927184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:42.158 [2024-11-18 20:36:53.927457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.158 [2024-11-18 20:36:53.927489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:42.158 19317.00 IOPS, 75.46 MiB/s [2024-11-18T19:36:54.166Z] [2024-11-18 20:36:53.941189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c35460) with pdu=0x2000166df118 00:35:42.158 [2024-11-18 20:36:53.941438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.158 [2024-11-18 
20:36:53.941483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:42.158 00:35:42.158 Latency(us) 00:35:42.158 [2024-11-18T19:36:54.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:42.158 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:42.158 nvme0n1 : 2.01 19315.99 75.45 0.00 0.00 6611.89 2767.08 16019.91 00:35:42.158 [2024-11-18T19:36:54.166Z] =================================================================================================================== 00:35:42.158 [2024-11-18T19:36:54.166Z] Total : 19315.99 75.45 0.00 0.00 6611.89 2767.08 16019.91 00:35:42.158 { 00:35:42.158 "results": [ 00:35:42.158 { 00:35:42.158 "job": "nvme0n1", 00:35:42.158 "core_mask": "0x2", 00:35:42.158 "workload": "randwrite", 00:35:42.158 "status": "finished", 00:35:42.158 "queue_depth": 128, 00:35:42.158 "io_size": 4096, 00:35:42.158 "runtime": 2.006731, 00:35:42.158 "iops": 19315.992028826982, 00:35:42.158 "mibps": 75.4530938626054, 00:35:42.158 "io_failed": 0, 00:35:42.158 "io_timeout": 0, 00:35:42.158 "avg_latency_us": 6611.885500690824, 00:35:42.158 "min_latency_us": 2767.0755555555556, 00:35:42.158 "max_latency_us": 16019.91111111111 00:35:42.158 } 00:35:42.158 ], 00:35:42.158 "core_count": 1 00:35:42.158 } 00:35:42.158 20:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:42.158 20:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:42.158 20:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:42.158 20:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:42.158 | 
.driver_specific 00:35:42.158 | .nvme_error 00:35:42.158 | .status_code 00:35:42.158 | .command_transient_transport_error' 00:35:42.417 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 152 > 0 )) 00:35:42.417 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 397494 00:35:42.417 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 397494 ']' 00:35:42.417 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 397494 00:35:42.417 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:42.417 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:42.417 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 397494 00:35:42.417 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:42.417 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:42.417 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 397494' 00:35:42.417 killing process with pid 397494 00:35:42.417 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 397494 00:35:42.417 Received shutdown signal, test time was about 2.000000 seconds 00:35:42.417 00:35:42.417 Latency(us) 00:35:42.417 [2024-11-18T19:36:54.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:42.417 [2024-11-18T19:36:54.425Z] =================================================================================================================== 00:35:42.417 [2024-11-18T19:36:54.425Z] Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:42.417 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 397494 00:35:42.675 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:35:42.675 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:42.675 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:42.675 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:42.675 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:42.675 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=398003 00:35:42.675 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:35:42.675 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 398003 /var/tmp/bperf.sock 00:35:42.675 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 398003 ']' 00:35:42.675 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:42.675 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:42.675 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:42.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:35:42.675 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:42.675 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:42.675 [2024-11-18 20:36:54.501135] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:35:42.675 [2024-11-18 20:36:54.501215] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid398003 ] 00:35:42.675 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:42.675 Zero copy mechanism will not be used. 00:35:42.675 [2024-11-18 20:36:54.567955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:42.675 [2024-11-18 20:36:54.614061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:42.934 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:42.934 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:42.934 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:42.934 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:43.192 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:43.192 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.192 20:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:35:43.192 20:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.192 20:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:43.192 20:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:43.760 nvme0n1 00:35:43.760 20:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:43.760 20:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.760 20:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:43.760 20:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.760 20:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:43.760 20:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:43.760 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:43.760 Zero copy mechanism will not be used. 00:35:43.760 Running I/O for 2 seconds... 
00:35:43.760 [2024-11-18 20:36:55.645404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:43.760 [2024-11-18 20:36:55.645516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.760 [2024-11-18 20:36:55.645558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:43.760 [2024-11-18 20:36:55.651789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:43.760 [2024-11-18 20:36:55.651882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.760 [2024-11-18 20:36:55.651914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:43.760 [2024-11-18 20:36:55.657452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:43.760 [2024-11-18 20:36:55.657547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.760 [2024-11-18 20:36:55.657576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:43.760 [2024-11-18 20:36:55.663087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:43.760 [2024-11-18 20:36:55.663179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.760 [2024-11-18 20:36:55.663208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:43.760 [2024-11-18 20:36:55.668486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:43.760 [2024-11-18 20:36:55.668593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.760 [2024-11-18 20:36:55.668622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:43.760 [2024-11-18 20:36:55.674482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:43.760 [2024-11-18 20:36:55.674556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.760 [2024-11-18 20:36:55.674583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:43.760 [2024-11-18 20:36:55.680341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:43.760 [2024-11-18 20:36:55.680419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.760 [2024-11-18 20:36:55.680448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:43.760 [2024-11-18 20:36:55.685763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:43.760 [2024-11-18 20:36:55.685858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.760 [2024-11-18 20:36:55.685887] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:43.760 [2024-11-18 20:36:55.691162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:43.760 [2024-11-18 20:36:55.691256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.760 [2024-11-18 20:36:55.691285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:43.760 [2024-11-18 20:36:55.696327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:43.760 [2024-11-18 20:36:55.696417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.760 [2024-11-18 20:36:55.696445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:43.760 [2024-11-18 20:36:55.701575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:43.760 [2024-11-18 20:36:55.701719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.760 [2024-11-18 20:36:55.701749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:43.760 [2024-11-18 20:36:55.706832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:43.760 [2024-11-18 20:36:55.706912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:43.760 [2024-11-18 20:36:55.706941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:43.760 [2024-11-18 20:36:55.712174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:43.760 [2024-11-18 20:36:55.712244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.760 [2024-11-18 20:36:55.712272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:43.760 [2024-11-18 20:36:55.718322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:43.760 [2024-11-18 20:36:55.718405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.760 [2024-11-18 20:36:55.718435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:43.760 [2024-11-18 20:36:55.723468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:43.760 [2024-11-18 20:36:55.723555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.760 [2024-11-18 20:36:55.723584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:43.760 [2024-11-18 20:36:55.728657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:43.760 [2024-11-18 20:36:55.728741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.760 [2024-11-18 20:36:55.728770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:43.760 [2024-11-18 20:36:55.733718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:43.760 [2024-11-18 20:36:55.733810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.760 [2024-11-18 20:36:55.733838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:43.760 [2024-11-18 20:36:55.739454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:43.760 [2024-11-18 20:36:55.739556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.760 [2024-11-18 20:36:55.739585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:43.760 [2024-11-18 20:36:55.745195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:43.761 [2024-11-18 20:36:55.745278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.761 [2024-11-18 20:36:55.745310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:43.761 [2024-11-18 20:36:55.750363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:43.761 [2024-11-18 20:36:55.750443] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.761 [2024-11-18 20:36:55.750471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:43.761 [2024-11-18 20:36:55.755401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:43.761 [2024-11-18 20:36:55.755501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.761 [2024-11-18 20:36:55.755530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:43.761 [2024-11-18 20:36:55.760514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:43.761 [2024-11-18 20:36:55.760648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.761 [2024-11-18 20:36:55.760677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:43.761 [2024-11-18 20:36:55.765655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:43.761 [2024-11-18 20:36:55.765781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.761 [2024-11-18 20:36:55.765809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.020 [2024-11-18 20:36:55.771002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.020 [2024-11-18 20:36:55.771100] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.020 [2024-11-18 20:36:55.771128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.020 [2024-11-18 20:36:55.776271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.020 [2024-11-18 20:36:55.776353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.020 [2024-11-18 20:36:55.776381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.020 [2024-11-18 20:36:55.781513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.020 [2024-11-18 20:36:55.781587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.020 [2024-11-18 20:36:55.781620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.020 [2024-11-18 20:36:55.786867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.020 [2024-11-18 20:36:55.786953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.020 [2024-11-18 20:36:55.786982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.020 [2024-11-18 20:36:55.792121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with 
pdu=0x2000166ff3c8 00:35:44.020 [2024-11-18 20:36:55.792207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.020 [2024-11-18 20:36:55.792236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.020 [2024-11-18 20:36:55.797441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.020 [2024-11-18 20:36:55.797535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.020 [2024-11-18 20:36:55.797566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.020 [2024-11-18 20:36:55.803467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.020 [2024-11-18 20:36:55.803575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.020 [2024-11-18 20:36:55.803603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.020 [2024-11-18 20:36:55.809799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.020 [2024-11-18 20:36:55.809871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.020 [2024-11-18 20:36:55.809898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.020 [2024-11-18 20:36:55.815703] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.020 [2024-11-18 20:36:55.815828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.020 [2024-11-18 20:36:55.815857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.020 [2024-11-18 20:36:55.821339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.020 [2024-11-18 20:36:55.821521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.020 [2024-11-18 20:36:55.821550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.020 [2024-11-18 20:36:55.827789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.020 [2024-11-18 20:36:55.827926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.020 [2024-11-18 20:36:55.827955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.021 [2024-11-18 20:36:55.833466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.021 [2024-11-18 20:36:55.833604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.021 [2024-11-18 20:36:55.833632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.021 [2024-11-18 
20:36:55.839139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.021 [2024-11-18 20:36:55.839246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.021 [2024-11-18 20:36:55.839275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.021 [2024-11-18 20:36:55.844222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.021 [2024-11-18 20:36:55.844350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.021 [2024-11-18 20:36:55.844378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.021 [2024-11-18 20:36:55.850373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.021 [2024-11-18 20:36:55.850564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.021 [2024-11-18 20:36:55.850593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.021 [2024-11-18 20:36:55.857404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.021 [2024-11-18 20:36:55.857523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.021 [2024-11-18 20:36:55.857552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:35:44.021 [2024-11-18 20:36:55.863460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.021 [2024-11-18 20:36:55.863679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.021 [2024-11-18 20:36:55.863708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.021 [2024-11-18 20:36:55.869772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.021 [2024-11-18 20:36:55.869911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.021 [2024-11-18 20:36:55.869940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.021 [2024-11-18 20:36:55.876187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.021 [2024-11-18 20:36:55.876343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.021 [2024-11-18 20:36:55.876372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.021 [2024-11-18 20:36:55.882542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.021 [2024-11-18 20:36:55.882732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.021 [2024-11-18 20:36:55.882761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.021 [2024-11-18 20:36:55.888799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.021 [2024-11-18 20:36:55.888971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.021 [2024-11-18 20:36:55.889000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.021 [2024-11-18 20:36:55.895161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.021 [2024-11-18 20:36:55.895338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.021 [2024-11-18 20:36:55.895367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.021 [2024-11-18 20:36:55.901552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.021 [2024-11-18 20:36:55.901709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.021 [2024-11-18 20:36:55.901738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.021 [2024-11-18 20:36:55.907829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.021 [2024-11-18 20:36:55.908013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.021 [2024-11-18 20:36:55.908041] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.021 [2024-11-18 20:36:55.914117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.021 [2024-11-18 20:36:55.914294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.021 [2024-11-18 20:36:55.914323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.021 [2024-11-18 20:36:55.920541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.021 [2024-11-18 20:36:55.920758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.021 [2024-11-18 20:36:55.920787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.021 [2024-11-18 20:36:55.926790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.021 [2024-11-18 20:36:55.926973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.021 [2024-11-18 20:36:55.927002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.021 [2024-11-18 20:36:55.933441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.021 [2024-11-18 20:36:55.933587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:44.021 [2024-11-18 20:36:55.933631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.021 [2024-11-18 20:36:55.939789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.021 [2024-11-18 20:36:55.939932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.021 [2024-11-18 20:36:55.939966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.021 [2024-11-18 20:36:55.946126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.021 [2024-11-18 20:36:55.946305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.021 [2024-11-18 20:36:55.946334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.021 [2024-11-18 20:36:55.951918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.021 [2024-11-18 20:36:55.952075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.021 [2024-11-18 20:36:55.952104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.021 [2024-11-18 20:36:55.957227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.021 [2024-11-18 20:36:55.957317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.021 [2024-11-18 20:36:55.957345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.021 [2024-11-18 20:36:55.963930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.021 [2024-11-18 20:36:55.964019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.021 [2024-11-18 20:36:55.964045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.021 [2024-11-18 20:36:55.970238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.021 [2024-11-18 20:36:55.970365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.022 [2024-11-18 20:36:55.970393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.022 [2024-11-18 20:36:55.977255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.022 [2024-11-18 20:36:55.977338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.022 [2024-11-18 20:36:55.977366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.022 [2024-11-18 20:36:55.984159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.022 [2024-11-18 20:36:55.984261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.022 [2024-11-18 20:36:55.984289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.022 [2024-11-18 20:36:55.990844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.022 [2024-11-18 20:36:55.990922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.022 [2024-11-18 20:36:55.990951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.022 [2024-11-18 20:36:55.996476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.022 [2024-11-18 20:36:55.996563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.022 [2024-11-18 20:36:55.996594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.022 [2024-11-18 20:36:56.001717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.022 [2024-11-18 20:36:56.001795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.022 [2024-11-18 20:36:56.001822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.022 [2024-11-18 20:36:56.006944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.022 [2024-11-18 20:36:56.007085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.022 [2024-11-18 20:36:56.007113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.022 [2024-11-18 20:36:56.012758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.022 [2024-11-18 20:36:56.012921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.022 [2024-11-18 20:36:56.012950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.022 [2024-11-18 20:36:56.019197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.022 [2024-11-18 20:36:56.019386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.022 [2024-11-18 20:36:56.019430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.022 [2024-11-18 20:36:56.026148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.022 [2024-11-18 20:36:56.026279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.022 [2024-11-18 20:36:56.026308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.282 [2024-11-18 20:36:56.033072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.282 [2024-11-18 20:36:56.033151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.282 [2024-11-18 20:36:56.033179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.282 [2024-11-18 20:36:56.040119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.282 [2024-11-18 20:36:56.040231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.282 [2024-11-18 20:36:56.040260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.282 [2024-11-18 20:36:56.046646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.282 [2024-11-18 20:36:56.046758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.282 [2024-11-18 20:36:56.046788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.282 [2024-11-18 20:36:56.052088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.282 [2024-11-18 20:36:56.052194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.282 [2024-11-18 20:36:56.052223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.282 [2024-11-18 20:36:56.057295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.282 [2024-11-18 20:36:56.057388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.282 [2024-11-18 20:36:56.057417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.282 [2024-11-18 20:36:56.062477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.282 [2024-11-18 20:36:56.062569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.282 [2024-11-18 20:36:56.062597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.282 [2024-11-18 20:36:56.067667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.282 [2024-11-18 20:36:56.067830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.282 [2024-11-18 20:36:56.067858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.282 [2024-11-18 20:36:56.072967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.282 [2024-11-18 20:36:56.073070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.282 [2024-11-18 20:36:56.073100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.282 [2024-11-18 20:36:56.078183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.282 [2024-11-18 20:36:56.078274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.282 [2024-11-18 20:36:56.078303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.282 [2024-11-18 20:36:56.083651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.282 [2024-11-18 20:36:56.083828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.282 [2024-11-18 20:36:56.083857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.282 [2024-11-18 20:36:56.090860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.282 [2024-11-18 20:36:56.090994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.282 [2024-11-18 20:36:56.091023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.282 [2024-11-18 20:36:56.096933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.282 [2024-11-18 20:36:56.097072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.282 [2024-11-18 20:36:56.097106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.282 [2024-11-18 20:36:56.102423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.282 [2024-11-18 20:36:56.102554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.282 [2024-11-18 20:36:56.102583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.282 [2024-11-18 20:36:56.108106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.282 [2024-11-18 20:36:56.108213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.282 [2024-11-18 20:36:56.108241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.282 [2024-11-18 20:36:56.113722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.282 [2024-11-18 20:36:56.113806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.282 [2024-11-18 20:36:56.113835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.282 [2024-11-18 20:36:56.119985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.282 [2024-11-18 20:36:56.120059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.282 [2024-11-18 20:36:56.120091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.282 [2024-11-18 20:36:56.126075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.282 [2024-11-18 20:36:56.126162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.282 [2024-11-18 20:36:56.126189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.282 [2024-11-18 20:36:56.131862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.282 [2024-11-18 20:36:56.131946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.282 [2024-11-18 20:36:56.131975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.283 [2024-11-18 20:36:56.137306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.283 [2024-11-18 20:36:56.137404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.283 [2024-11-18 20:36:56.137433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.283 [2024-11-18 20:36:56.142441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.283 [2024-11-18 20:36:56.142530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.283 [2024-11-18 20:36:56.142559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.283 [2024-11-18 20:36:56.147648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.283 [2024-11-18 20:36:56.147765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.283 [2024-11-18 20:36:56.147794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.283 [2024-11-18 20:36:56.152792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.283 [2024-11-18 20:36:56.152884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.283 [2024-11-18 20:36:56.152912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.283 [2024-11-18 20:36:56.158311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.283 [2024-11-18 20:36:56.158392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.283 [2024-11-18 20:36:56.158419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.283 [2024-11-18 20:36:56.163594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.283 [2024-11-18 20:36:56.163688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.283 [2024-11-18 20:36:56.163717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.283 [2024-11-18 20:36:56.168839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.283 [2024-11-18 20:36:56.168954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.283 [2024-11-18 20:36:56.168989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.283 [2024-11-18 20:36:56.174148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.283 [2024-11-18 20:36:56.174239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.283 [2024-11-18 20:36:56.174267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.283 [2024-11-18 20:36:56.179300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.283 [2024-11-18 20:36:56.179404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.283 [2024-11-18 20:36:56.179432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.283 [2024-11-18 20:36:56.184417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.283 [2024-11-18 20:36:56.184507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.283 [2024-11-18 20:36:56.184535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.283 [2024-11-18 20:36:56.189539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.283 [2024-11-18 20:36:56.189635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.283 [2024-11-18 20:36:56.189671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.283 [2024-11-18 20:36:56.194632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.283 [2024-11-18 20:36:56.194730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.283 [2024-11-18 20:36:56.194759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.283 [2024-11-18 20:36:56.200003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.283 [2024-11-18 20:36:56.200104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.283 [2024-11-18 20:36:56.200133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.283 [2024-11-18 20:36:56.205224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.283 [2024-11-18 20:36:56.205312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.283 [2024-11-18 20:36:56.205340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.283 [2024-11-18 20:36:56.210400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.283 [2024-11-18 20:36:56.210491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.283 [2024-11-18 20:36:56.210520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.283 [2024-11-18 20:36:56.215456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.283 [2024-11-18 20:36:56.215552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.283 [2024-11-18 20:36:56.215580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.283 [2024-11-18 20:36:56.220691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.283 [2024-11-18 20:36:56.220791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.283 [2024-11-18 20:36:56.220820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.283 [2024-11-18 20:36:56.225807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.283 [2024-11-18 20:36:56.225888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.283 [2024-11-18 20:36:56.225917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.283 [2024-11-18 20:36:56.230935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.283 [2024-11-18 20:36:56.231043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.283 [2024-11-18 20:36:56.231072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.283 [2024-11-18 20:36:56.236030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.283 [2024-11-18 20:36:56.236139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.283 [2024-11-18 20:36:56.236174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.283 [2024-11-18 20:36:56.241176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.283 [2024-11-18 20:36:56.241276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.283 [2024-11-18 20:36:56.241305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.283 [2024-11-18 20:36:56.246321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.284 [2024-11-18 20:36:56.246411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.284 [2024-11-18 20:36:56.246439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.284 [2024-11-18 20:36:56.251903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.284 [2024-11-18 20:36:56.251994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.284 [2024-11-18 20:36:56.252022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.284 [2024-11-18 20:36:56.257842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.284 [2024-11-18 20:36:56.257959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.284 [2024-11-18 20:36:56.257987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.284 [2024-11-18 20:36:56.263895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.284 [2024-11-18 20:36:56.263997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.284 [2024-11-18 20:36:56.264026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.284 [2024-11-18 20:36:56.270080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.284 [2024-11-18 20:36:56.270157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.284 [2024-11-18 20:36:56.270184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.284 [2024-11-18 20:36:56.276094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.284 [2024-11-18 20:36:56.276166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.284 [2024-11-18 20:36:56.276193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.284 [2024-11-18 20:36:56.281997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.284 [2024-11-18 20:36:56.282073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.284 [2024-11-18 20:36:56.282101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.284 [2024-11-18 20:36:56.288186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.284 [2024-11-18 20:36:56.288305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.284 [2024-11-18 20:36:56.288334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.544 [2024-11-18 20:36:56.294426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.544 [2024-11-18 20:36:56.294501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.544 [2024-11-18 20:36:56.294531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.544 [2024-11-18 20:36:56.299591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.544 [2024-11-18 20:36:56.299697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.544 [2024-11-18 20:36:56.299726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.544 [2024-11-18 20:36:56.304670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.544 [2024-11-18 20:36:56.304770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.544 [2024-11-18 20:36:56.304799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.544 [2024-11-18 20:36:56.309711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.544 [2024-11-18 20:36:56.309794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.544 [2024-11-18 20:36:56.309823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.544 [2024-11-18 20:36:56.314907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.544 [2024-11-18 20:36:56.314981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.544 [2024-11-18 20:36:56.315008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.544 [2024-11-18 20:36:56.319899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.544 [2024-11-18 20:36:56.319991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.544 [2024-11-18 20:36:56.320020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.544 [2024-11-18 20:36:56.324941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.544 [2024-11-18 20:36:56.325026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.544 [2024-11-18 20:36:56.325054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.544 [2024-11-18 20:36:56.330086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.544 [2024-11-18 20:36:56.330167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.544 [2024-11-18 20:36:56.330196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.544 [2024-11-18 20:36:56.335125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.544 [2024-11-18 20:36:56.335205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.544 [2024-11-18 20:36:56.335233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.544 [2024-11-18 20:36:56.340252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.544 [2024-11-18 20:36:56.340351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.544 [2024-11-18 20:36:56.340379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.544 [2024-11-18 20:36:56.345479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.544 [2024-11-18 20:36:56.345568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.544 [2024-11-18 20:36:56.345596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.544 [2024-11-18 20:36:56.350682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.544 [2024-11-18 20:36:56.350780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.544 [2024-11-18 20:36:56.350809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.544 [2024-11-18 20:36:56.355698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.544 [2024-11-18 20:36:56.355800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.544 [2024-11-18 20:36:56.355827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.544 [2024-11-18 20:36:56.361231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.544 [2024-11-18 20:36:56.361310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.545 [2024-11-18 20:36:56.361339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.545 [2024-11-18 20:36:56.367232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.545 [2024-11-18 20:36:56.367311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.545 [2024-11-18 20:36:56.367339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.545 [2024-11-18 20:36:56.372297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.545 [2024-11-18 20:36:56.372373] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.545 [2024-11-18 20:36:56.372401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.545 [2024-11-18 20:36:56.378675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.545 [2024-11-18 20:36:56.378747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.545 [2024-11-18 20:36:56.378782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.545 [2024-11-18 20:36:56.384801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.545 [2024-11-18 20:36:56.384873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.545 [2024-11-18 20:36:56.384900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.545 [2024-11-18 20:36:56.390826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.545 [2024-11-18 20:36:56.390901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.545 [2024-11-18 20:36:56.390930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.545 [2024-11-18 20:36:56.396737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.545 [2024-11-18 
20:36:56.396810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.545 [2024-11-18 20:36:56.396838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.545 [2024-11-18 20:36:56.402690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.545 [2024-11-18 20:36:56.402769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.545 [2024-11-18 20:36:56.402797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.545 [2024-11-18 20:36:56.409183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.545 [2024-11-18 20:36:56.409276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.545 [2024-11-18 20:36:56.409303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.545 [2024-11-18 20:36:56.415293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.545 [2024-11-18 20:36:56.415365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.545 [2024-11-18 20:36:56.415392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.545 [2024-11-18 20:36:56.421303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with 
pdu=0x2000166ff3c8 00:35:44.545 [2024-11-18 20:36:56.421377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.545 [2024-11-18 20:36:56.421405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.545 [2024-11-18 20:36:56.427460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.545 [2024-11-18 20:36:56.427544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.545 [2024-11-18 20:36:56.427573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.545 [2024-11-18 20:36:56.433653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.545 [2024-11-18 20:36:56.433740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.545 [2024-11-18 20:36:56.433767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.545 [2024-11-18 20:36:56.439648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.545 [2024-11-18 20:36:56.439735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.545 [2024-11-18 20:36:56.439764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.545 [2024-11-18 20:36:56.445645] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.545 [2024-11-18 20:36:56.445758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.545 [2024-11-18 20:36:56.445787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.545 [2024-11-18 20:36:56.451744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.545 [2024-11-18 20:36:56.451816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.545 [2024-11-18 20:36:56.451843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.545 [2024-11-18 20:36:56.457469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.545 [2024-11-18 20:36:56.457558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.545 [2024-11-18 20:36:56.457586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.545 [2024-11-18 20:36:56.462501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.545 [2024-11-18 20:36:56.462592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.545 [2024-11-18 20:36:56.462621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.545 [2024-11-18 
20:36:56.467696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.545 [2024-11-18 20:36:56.467789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.545 [2024-11-18 20:36:56.467818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.545 [2024-11-18 20:36:56.472828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.545 [2024-11-18 20:36:56.472918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.545 [2024-11-18 20:36:56.472947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.545 [2024-11-18 20:36:56.478031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.545 [2024-11-18 20:36:56.478124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.545 [2024-11-18 20:36:56.478153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.545 [2024-11-18 20:36:56.483299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.545 [2024-11-18 20:36:56.483407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.545 [2024-11-18 20:36:56.483435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:35:44.545 [2024-11-18 20:36:56.488463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.545 [2024-11-18 20:36:56.488547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.546 [2024-11-18 20:36:56.488574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.546 [2024-11-18 20:36:56.493609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.546 [2024-11-18 20:36:56.493723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.546 [2024-11-18 20:36:56.493752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.546 [2024-11-18 20:36:56.498801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.546 [2024-11-18 20:36:56.498891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.546 [2024-11-18 20:36:56.498920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.546 [2024-11-18 20:36:56.504000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.546 [2024-11-18 20:36:56.504079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.546 [2024-11-18 20:36:56.504108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.546 [2024-11-18 20:36:56.509061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.546 [2024-11-18 20:36:56.509174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.546 [2024-11-18 20:36:56.509203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.546 [2024-11-18 20:36:56.514165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.546 [2024-11-18 20:36:56.514261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.546 [2024-11-18 20:36:56.514291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.546 [2024-11-18 20:36:56.519310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.546 [2024-11-18 20:36:56.519395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.546 [2024-11-18 20:36:56.519424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.546 [2024-11-18 20:36:56.524331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.546 [2024-11-18 20:36:56.524431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.546 [2024-11-18 20:36:56.524466] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.546 [2024-11-18 20:36:56.529428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.546 [2024-11-18 20:36:56.529519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.546 [2024-11-18 20:36:56.529547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.546 [2024-11-18 20:36:56.534562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.546 [2024-11-18 20:36:56.534665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.546 [2024-11-18 20:36:56.534694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.546 [2024-11-18 20:36:56.540158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.546 [2024-11-18 20:36:56.540227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.546 [2024-11-18 20:36:56.540254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.546 [2024-11-18 20:36:56.546110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.546 [2024-11-18 20:36:56.546186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:44.546 [2024-11-18 20:36:56.546213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.806 [2024-11-18 20:36:56.552240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.806 [2024-11-18 20:36:56.552323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.806 [2024-11-18 20:36:56.552353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.806 [2024-11-18 20:36:56.558377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.806 [2024-11-18 20:36:56.558453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.806 [2024-11-18 20:36:56.558480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.806 [2024-11-18 20:36:56.564196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.806 [2024-11-18 20:36:56.564268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.806 [2024-11-18 20:36:56.564295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.806 [2024-11-18 20:36:56.570156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.806 [2024-11-18 20:36:56.570233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.806 [2024-11-18 20:36:56.570263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.806 [2024-11-18 20:36:56.575995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.806 [2024-11-18 20:36:56.576104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.806 [2024-11-18 20:36:56.576133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.806 [2024-11-18 20:36:56.581814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.806 [2024-11-18 20:36:56.581887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.806 [2024-11-18 20:36:56.581914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.806 [2024-11-18 20:36:56.587819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.806 [2024-11-18 20:36:56.587919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.806 [2024-11-18 20:36:56.587947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.806 [2024-11-18 20:36:56.593034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.806 [2024-11-18 20:36:56.593135] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.806 [2024-11-18 20:36:56.593164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.806 [2024-11-18 20:36:56.598249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.806 [2024-11-18 20:36:56.598342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.806 [2024-11-18 20:36:56.598370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.806 [2024-11-18 20:36:56.603448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.806 [2024-11-18 20:36:56.603542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.806 [2024-11-18 20:36:56.603571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.806 [2024-11-18 20:36:56.608586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.806 [2024-11-18 20:36:56.608735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.807 [2024-11-18 20:36:56.608764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.807 [2024-11-18 20:36:56.613747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 
00:35:44.807 [2024-11-18 20:36:56.613829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.807 [2024-11-18 20:36:56.613857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.807 [2024-11-18 20:36:56.619843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.807 [2024-11-18 20:36:56.619917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.807 [2024-11-18 20:36:56.619945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.807 [2024-11-18 20:36:56.624922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.807 [2024-11-18 20:36:56.625025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.807 [2024-11-18 20:36:56.625054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.807 [2024-11-18 20:36:56.630063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.807 [2024-11-18 20:36:56.630158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.807 [2024-11-18 20:36:56.630186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.807 [2024-11-18 20:36:56.635203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.807 [2024-11-18 20:36:56.635300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.807 [2024-11-18 20:36:56.635329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.807 [2024-11-18 20:36:56.640325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.807 [2024-11-18 20:36:56.640410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.807 [2024-11-18 20:36:56.640442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.807 5463.00 IOPS, 682.88 MiB/s [2024-11-18T19:36:56.815Z] [2024-11-18 20:36:56.646693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.807 [2024-11-18 20:36:56.646788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.807 [2024-11-18 20:36:56.646815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.807 [2024-11-18 20:36:56.652556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.807 [2024-11-18 20:36:56.652631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.807 [2024-11-18 20:36:56.652667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:35:44.807 [2024-11-18 20:36:56.659792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.807 [2024-11-18 20:36:56.659971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.807 [2024-11-18 20:36:56.660000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.807 [2024-11-18 20:36:56.666364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.807 [2024-11-18 20:36:56.666463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.807 [2024-11-18 20:36:56.666491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.807 [2024-11-18 20:36:56.671900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.807 [2024-11-18 20:36:56.672033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.807 [2024-11-18 20:36:56.672072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.807 [2024-11-18 20:36:56.677175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:44.807 [2024-11-18 20:36:56.677309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.807 [2024-11-18 20:36:56.677337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.807 [2024-11-18 20:36:56.682787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.807 [2024-11-18 20:36:56.682951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.807 [2024-11-18 20:36:56.682979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.807 [2024-11-18 20:36:56.689210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.807 [2024-11-18 20:36:56.689319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.807 [2024-11-18 20:36:56.689347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.807 [2024-11-18 20:36:56.696553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.807 [2024-11-18 20:36:56.696763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.807 [2024-11-18 20:36:56.696792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.807 [2024-11-18 20:36:56.703766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.807 [2024-11-18 20:36:56.703838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.807 [2024-11-18 20:36:56.703865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.807 [2024-11-18 20:36:56.711426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.807 [2024-11-18 20:36:56.711539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.807 [2024-11-18 20:36:56.711567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.807 [2024-11-18 20:36:56.718972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.807 [2024-11-18 20:36:56.719181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.807 [2024-11-18 20:36:56.719210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.807 [2024-11-18 20:36:56.725315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.807 [2024-11-18 20:36:56.725497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.807 [2024-11-18 20:36:56.725526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.807 [2024-11-18 20:36:56.731553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.807 [2024-11-18 20:36:56.731701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.808 [2024-11-18 20:36:56.731730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.808 [2024-11-18 20:36:56.737090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.808 [2024-11-18 20:36:56.737174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.808 [2024-11-18 20:36:56.737202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.808 [2024-11-18 20:36:56.742313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.808 [2024-11-18 20:36:56.742413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.808 [2024-11-18 20:36:56.742440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.808 [2024-11-18 20:36:56.748104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.808 [2024-11-18 20:36:56.748261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.808 [2024-11-18 20:36:56.748289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.808 [2024-11-18 20:36:56.754424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.808 [2024-11-18 20:36:56.754509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.808 [2024-11-18 20:36:56.754538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.808 [2024-11-18 20:36:56.759507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.808 [2024-11-18 20:36:56.759591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.808 [2024-11-18 20:36:56.759619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.808 [2024-11-18 20:36:56.764610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.808 [2024-11-18 20:36:56.764715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.808 [2024-11-18 20:36:56.764744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.808 [2024-11-18 20:36:56.769854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.808 [2024-11-18 20:36:56.769958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.808 [2024-11-18 20:36:56.769986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.808 [2024-11-18 20:36:56.774912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.808 [2024-11-18 20:36:56.774988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.808 [2024-11-18 20:36:56.775015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.808 [2024-11-18 20:36:56.780206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.808 [2024-11-18 20:36:56.780297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.808 [2024-11-18 20:36:56.780325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.808 [2024-11-18 20:36:56.785294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.808 [2024-11-18 20:36:56.785399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.808 [2024-11-18 20:36:56.785427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.808 [2024-11-18 20:36:56.790402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.808 [2024-11-18 20:36:56.790491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.808 [2024-11-18 20:36:56.790519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.808 [2024-11-18 20:36:56.795658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.808 [2024-11-18 20:36:56.795743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.808 [2024-11-18 20:36:56.795771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.808 [2024-11-18 20:36:56.800805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.808 [2024-11-18 20:36:56.800908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.808 [2024-11-18 20:36:56.800937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.808 [2024-11-18 20:36:56.805950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.808 [2024-11-18 20:36:56.806039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.808 [2024-11-18 20:36:56.806067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.808 [2024-11-18 20:36:56.811182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:44.808 [2024-11-18 20:36:56.811284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.808 [2024-11-18 20:36:56.811313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:45.067 [2024-11-18 20:36:56.817851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.067 [2024-11-18 20:36:56.817931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.067 [2024-11-18 20:36:56.817959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:45.067 [2024-11-18 20:36:56.823014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.067 [2024-11-18 20:36:56.823108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.067 [2024-11-18 20:36:56.823143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:45.067 [2024-11-18 20:36:56.828177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.067 [2024-11-18 20:36:56.828268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.067 [2024-11-18 20:36:56.828296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:45.068 [2024-11-18 20:36:56.833251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.068 [2024-11-18 20:36:56.833347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.068 [2024-11-18 20:36:56.833375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:45.068 [2024-11-18 20:36:56.838280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.068 [2024-11-18 20:36:56.838379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.068 [2024-11-18 20:36:56.838408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:45.068 [2024-11-18 20:36:56.843328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.068 [2024-11-18 20:36:56.843413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.068 [2024-11-18 20:36:56.843441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:45.068 [2024-11-18 20:36:56.848528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.068 [2024-11-18 20:36:56.848613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.068 [2024-11-18 20:36:56.848649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:45.068 [2024-11-18 20:36:56.853671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.068 [2024-11-18 20:36:56.853773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.068 [2024-11-18 20:36:56.853800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:45.068 [2024-11-18 20:36:56.859465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.068 [2024-11-18 20:36:56.859543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.068 [2024-11-18 20:36:56.859572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:45.068 [2024-11-18 20:36:56.865502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.068 [2024-11-18 20:36:56.865633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.068 [2024-11-18 20:36:56.865670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:45.068 [2024-11-18 20:36:56.871838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.068 [2024-11-18 20:36:56.872007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.068 [2024-11-18 20:36:56.872035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:45.068 [2024-11-18 20:36:56.878230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.068 [2024-11-18 20:36:56.878405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.068 [2024-11-18 20:36:56.878434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:45.068 [2024-11-18 20:36:56.884519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.068 [2024-11-18 20:36:56.884705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.068 [2024-11-18 20:36:56.884734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:45.068 [2024-11-18 20:36:56.890807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.068 [2024-11-18 20:36:56.890987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.068 [2024-11-18 20:36:56.891015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:45.068 [2024-11-18 20:36:56.897215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.068 [2024-11-18 20:36:56.897429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.068 [2024-11-18 20:36:56.897457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:45.068 [2024-11-18 20:36:56.903470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.068 [2024-11-18 20:36:56.903666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.068 [2024-11-18 20:36:56.903694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:45.068 [2024-11-18 20:36:56.909851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.068 [2024-11-18 20:36:56.910041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.068 [2024-11-18 20:36:56.910069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:45.068 [2024-11-18 20:36:56.916179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.068 [2024-11-18 20:36:56.916363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.068 [2024-11-18 20:36:56.916392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:45.068 [2024-11-18 20:36:56.922619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.068 [2024-11-18 20:36:56.922815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.068 [2024-11-18 20:36:56.922844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:45.068 [2024-11-18 20:36:56.928841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.068 [2024-11-18 20:36:56.929012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.068 [2024-11-18 20:36:56.929040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:45.068 [2024-11-18 20:36:56.935312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.068 [2024-11-18 20:36:56.935520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.068 [2024-11-18 20:36:56.935548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:45.068 [2024-11-18 20:36:56.941525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.068 [2024-11-18 20:36:56.941715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.068 [2024-11-18 20:36:56.941743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:45.068 [2024-11-18 20:36:56.947937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.068 [2024-11-18 20:36:56.948148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.068 [2024-11-18 20:36:56.948178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:45.068 [2024-11-18 20:36:56.954238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.068 [2024-11-18 20:36:56.954461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.068 [2024-11-18 20:36:56.954490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:45.068 [2024-11-18 20:36:56.960407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.068 [2024-11-18 20:36:56.960583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.068 [2024-11-18 20:36:56.960611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:45.068 [2024-11-18 20:36:56.966731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.068 [2024-11-18 20:36:56.966903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.068 [2024-11-18 20:36:56.966931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:45.068 [2024-11-18 20:36:56.973017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.068 [2024-11-18 20:36:56.973195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.068 [2024-11-18 20:36:56.973224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:45.068 [2024-11-18 20:36:56.979262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.068 [2024-11-18 20:36:56.979430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.068 [2024-11-18 20:36:56.979465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:45.068 [2024-11-18 20:36:56.985716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.068 [2024-11-18 20:36:56.985889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.068 [2024-11-18 20:36:56.985918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:45.068 [2024-11-18 20:36:56.991824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.068 [2024-11-18 20:36:56.992015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.069 [2024-11-18 20:36:56.992043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:45.069 [2024-11-18 20:36:56.998167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.069 [2024-11-18 20:36:56.998360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.069 [2024-11-18 20:36:56.998388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:45.069 [2024-11-18 20:36:57.004467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.069 [2024-11-18 20:36:57.004656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.069 [2024-11-18 20:36:57.004685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:45.069 [2024-11-18 20:36:57.010163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.069 [2024-11-18 20:36:57.010267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.069 [2024-11-18 20:36:57.010295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:45.069 [2024-11-18 20:36:57.015460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.069 [2024-11-18 20:36:57.015544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.069 [2024-11-18 20:36:57.015572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:45.069 [2024-11-18 20:36:57.020615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.069 [2024-11-18 20:36:57.020721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.069 [2024-11-18 20:36:57.020751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:45.069 [2024-11-18 20:36:57.026325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.069 [2024-11-18 20:36:57.026396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.069 [2024-11-18 20:36:57.026423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:45.069 [2024-11-18 20:36:57.032768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.069 [2024-11-18 20:36:57.032869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.069 [2024-11-18 20:36:57.032898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:45.069 [2024-11-18 20:36:57.038588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.069 [2024-11-18 20:36:57.038777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.069 [2024-11-18 20:36:57.038806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:45.069 [2024-11-18 20:36:57.045909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.069 [2024-11-18 20:36:57.046105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.069 [2024-11-18 20:36:57.046133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:45.069 [2024-11-18 20:36:57.051680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.069 [2024-11-18 20:36:57.051810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.069 [2024-11-18 20:36:57.051838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:45.069 [2024-11-18 20:36:57.056872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.069 [2024-11-18 20:36:57.057010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.069 [2024-11-18 20:36:57.057039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:45.069 [2024-11-18 20:36:57.061978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.069 [2024-11-18 20:36:57.062085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.069 [2024-11-18 20:36:57.062113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:45.069 [2024-11-18 20:36:57.067091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.069 [2024-11-18 20:36:57.067211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.069 [2024-11-18 20:36:57.067241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:45.069 [2024-11-18 20:36:57.072152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.069 [2024-11-18 20:36:57.072236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.069 [2024-11-18 20:36:57.072264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:45.330 [2024-11-18 20:36:57.077406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.330 [2024-11-18 20:36:57.077520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.330 [2024-11-18 20:36:57.077549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:45.330 [2024-11-18 20:36:57.082456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.330 [2024-11-18 20:36:57.082560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.330 [2024-11-18 20:36:57.082588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:45.330 [2024-11-18 20:36:57.087889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.330 [2024-11-18 20:36:57.087981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.330 [2024-11-18 20:36:57.088009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:45.330 [2024-11-18 20:36:57.093957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.330 [2024-11-18 20:36:57.094029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.330 [2024-11-18 20:36:57.094056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:45.330 [2024-11-18 20:36:57.099502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.330 [2024-11-18 20:36:57.099595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.330 [2024-11-18 20:36:57.099624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:45.330 [2024-11-18 20:36:57.104518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.330 [2024-11-18 20:36:57.104587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.330 [2024-11-18 20:36:57.104614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:45.330 [2024-11-18 20:36:57.109618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.330 [2024-11-18 20:36:57.109707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.330 [2024-11-18 20:36:57.109737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:45.330 [2024-11-18 20:36:57.114751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.330 [2024-11-18 20:36:57.114830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.330 [2024-11-18 20:36:57.114860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:45.330 [2024-11-18 20:36:57.119949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.330 [2024-11-18 20:36:57.120046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.330 [2024-11-18 20:36:57.120075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:45.330 [2024-11-18 20:36:57.125060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.330 [2024-11-18 20:36:57.125157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.330 [2024-11-18 20:36:57.125192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:45.330 [2024-11-18 20:36:57.130185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.330 [2024-11-18 20:36:57.130260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.330 [2024-11-18 20:36:57.130288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:45.330 [2024-11-18 20:36:57.135241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.330 [2024-11-18 20:36:57.135343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.330 [2024-11-18 20:36:57.135372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:45.330 [2024-11-18 20:36:57.140475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.330 [2024-11-18 20:36:57.140571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.330 [2024-11-18 20:36:57.140599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:45.330 [2024-11-18 20:36:57.145543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.330 [2024-11-18 20:36:57.145624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.330 [2024-11-18 20:36:57.145660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:45.330 [2024-11-18 20:36:57.150729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8
00:35:45.330 [2024-11-18 20:36:57.150824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.330 [2024-11-18 20:36:57.150853] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.330 [2024-11-18 20:36:57.156076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.330 [2024-11-18 20:36:57.156173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.330 [2024-11-18 20:36:57.156202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.330 [2024-11-18 20:36:57.161137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.330 [2024-11-18 20:36:57.161222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.330 [2024-11-18 20:36:57.161249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.330 [2024-11-18 20:36:57.166997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.330 [2024-11-18 20:36:57.167067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.330 [2024-11-18 20:36:57.167094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.330 [2024-11-18 20:36:57.172806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.330 [2024-11-18 20:36:57.172907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:45.330 [2024-11-18 20:36:57.172936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.331 [2024-11-18 20:36:57.177902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.331 [2024-11-18 20:36:57.177970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.331 [2024-11-18 20:36:57.177997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.331 [2024-11-18 20:36:57.183015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.331 [2024-11-18 20:36:57.183093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.331 [2024-11-18 20:36:57.183125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.331 [2024-11-18 20:36:57.188167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.331 [2024-11-18 20:36:57.188262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.331 [2024-11-18 20:36:57.188290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.331 [2024-11-18 20:36:57.193340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.331 [2024-11-18 20:36:57.193441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.331 [2024-11-18 20:36:57.193469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.331 [2024-11-18 20:36:57.198560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.331 [2024-11-18 20:36:57.198631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.331 [2024-11-18 20:36:57.198664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.331 [2024-11-18 20:36:57.205160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.331 [2024-11-18 20:36:57.205238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.331 [2024-11-18 20:36:57.205269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.331 [2024-11-18 20:36:57.211478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.331 [2024-11-18 20:36:57.211682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.331 [2024-11-18 20:36:57.211710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.331 [2024-11-18 20:36:57.217900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.331 [2024-11-18 20:36:57.218001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.331 [2024-11-18 20:36:57.218030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.331 [2024-11-18 20:36:57.224212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.331 [2024-11-18 20:36:57.224383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.331 [2024-11-18 20:36:57.224412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.331 [2024-11-18 20:36:57.229412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.331 [2024-11-18 20:36:57.229509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.331 [2024-11-18 20:36:57.229537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.331 [2024-11-18 20:36:57.234584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.331 [2024-11-18 20:36:57.234739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.331 [2024-11-18 20:36:57.234767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.331 [2024-11-18 20:36:57.239558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 
00:35:45.331 [2024-11-18 20:36:57.239663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.331 [2024-11-18 20:36:57.239692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.331 [2024-11-18 20:36:57.244481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.331 [2024-11-18 20:36:57.244572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.331 [2024-11-18 20:36:57.244600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.331 [2024-11-18 20:36:57.249811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.331 [2024-11-18 20:36:57.249932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.331 [2024-11-18 20:36:57.249960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.331 [2024-11-18 20:36:57.255180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.331 [2024-11-18 20:36:57.255288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.331 [2024-11-18 20:36:57.255316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.331 [2024-11-18 20:36:57.260472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.331 [2024-11-18 20:36:57.260575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.331 [2024-11-18 20:36:57.260603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.331 [2024-11-18 20:36:57.265670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.331 [2024-11-18 20:36:57.265842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.331 [2024-11-18 20:36:57.265877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.331 [2024-11-18 20:36:57.272078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.331 [2024-11-18 20:36:57.272289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.331 [2024-11-18 20:36:57.272317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.331 [2024-11-18 20:36:57.277556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.331 [2024-11-18 20:36:57.277706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.331 [2024-11-18 20:36:57.277735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.331 [2024-11-18 20:36:57.282825] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.331 [2024-11-18 20:36:57.282972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.331 [2024-11-18 20:36:57.283001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.331 [2024-11-18 20:36:57.287938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.331 [2024-11-18 20:36:57.288047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.331 [2024-11-18 20:36:57.288075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.331 [2024-11-18 20:36:57.292964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.331 [2024-11-18 20:36:57.293061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.331 [2024-11-18 20:36:57.293089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.331 [2024-11-18 20:36:57.298042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.331 [2024-11-18 20:36:57.298154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.331 [2024-11-18 20:36:57.298182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:35:45.331 [2024-11-18 20:36:57.303393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.331 [2024-11-18 20:36:57.303565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.331 [2024-11-18 20:36:57.303593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.331 [2024-11-18 20:36:57.309705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.331 [2024-11-18 20:36:57.309886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.331 [2024-11-18 20:36:57.309915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.331 [2024-11-18 20:36:57.314796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.331 [2024-11-18 20:36:57.314942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.331 [2024-11-18 20:36:57.314971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.331 [2024-11-18 20:36:57.319884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.331 [2024-11-18 20:36:57.320010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.331 [2024-11-18 20:36:57.320039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.332 [2024-11-18 20:36:57.325113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.332 [2024-11-18 20:36:57.325211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.332 [2024-11-18 20:36:57.325239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.332 [2024-11-18 20:36:57.330303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.332 [2024-11-18 20:36:57.330434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.332 [2024-11-18 20:36:57.330463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.332 [2024-11-18 20:36:57.335520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.332 [2024-11-18 20:36:57.335663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.332 [2024-11-18 20:36:57.335692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.592 [2024-11-18 20:36:57.341246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.592 [2024-11-18 20:36:57.341336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.592 [2024-11-18 20:36:57.341364] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.592 [2024-11-18 20:36:57.346259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.592 [2024-11-18 20:36:57.346349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.592 [2024-11-18 20:36:57.346377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.592 [2024-11-18 20:36:57.351467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.592 [2024-11-18 20:36:57.351556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.592 [2024-11-18 20:36:57.351584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.592 [2024-11-18 20:36:57.356715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.592 [2024-11-18 20:36:57.356810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.592 [2024-11-18 20:36:57.356838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.592 [2024-11-18 20:36:57.361866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.592 [2024-11-18 20:36:57.361957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:45.592 [2024-11-18 20:36:57.361985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.592 [2024-11-18 20:36:57.367238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.592 [2024-11-18 20:36:57.367330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.592 [2024-11-18 20:36:57.367359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.592 [2024-11-18 20:36:57.372418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.592 [2024-11-18 20:36:57.372515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.592 [2024-11-18 20:36:57.372543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.592 [2024-11-18 20:36:57.377589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.592 [2024-11-18 20:36:57.377685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.592 [2024-11-18 20:36:57.377713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.592 [2024-11-18 20:36:57.382689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.592 [2024-11-18 20:36:57.382802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.592 [2024-11-18 20:36:57.382830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.592 [2024-11-18 20:36:57.388198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.592 [2024-11-18 20:36:57.388269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.592 [2024-11-18 20:36:57.388297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.592 [2024-11-18 20:36:57.394226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.592 [2024-11-18 20:36:57.394298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.592 [2024-11-18 20:36:57.394325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.592 [2024-11-18 20:36:57.400220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.592 [2024-11-18 20:36:57.400324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.592 [2024-11-18 20:36:57.400352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.592 [2024-11-18 20:36:57.406222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.592 [2024-11-18 20:36:57.406298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.592 [2024-11-18 20:36:57.406333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.592 [2024-11-18 20:36:57.412173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.592 [2024-11-18 20:36:57.412249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.592 [2024-11-18 20:36:57.412275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.592 [2024-11-18 20:36:57.418263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.592 [2024-11-18 20:36:57.418341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.592 [2024-11-18 20:36:57.418368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.592 [2024-11-18 20:36:57.424430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.592 [2024-11-18 20:36:57.424506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.592 [2024-11-18 20:36:57.424533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.592 [2024-11-18 20:36:57.430763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 
00:35:45.592 [2024-11-18 20:36:57.430847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.592 [2024-11-18 20:36:57.430877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.592 [2024-11-18 20:36:57.436906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.592 [2024-11-18 20:36:57.436978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.592 [2024-11-18 20:36:57.437005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.592 [2024-11-18 20:36:57.443090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.592 [2024-11-18 20:36:57.443175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.592 [2024-11-18 20:36:57.443204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.592 [2024-11-18 20:36:57.449067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.592 [2024-11-18 20:36:57.449146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.592 [2024-11-18 20:36:57.449173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.592 [2024-11-18 20:36:57.455137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.592 [2024-11-18 20:36:57.455210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.592 [2024-11-18 20:36:57.455237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.592 [2024-11-18 20:36:57.461203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.592 [2024-11-18 20:36:57.461285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.592 [2024-11-18 20:36:57.461317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.592 [2024-11-18 20:36:57.467298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.592 [2024-11-18 20:36:57.467396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.592 [2024-11-18 20:36:57.467424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.592 [2024-11-18 20:36:57.473009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.592 [2024-11-18 20:36:57.473086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.592 [2024-11-18 20:36:57.473113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.592 [2024-11-18 20:36:57.478213] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.592 [2024-11-18 20:36:57.478290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.592 [2024-11-18 20:36:57.478318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.592 [2024-11-18 20:36:57.483254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.592 [2024-11-18 20:36:57.483364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.592 [2024-11-18 20:36:57.483408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.593 [2024-11-18 20:36:57.488555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.593 [2024-11-18 20:36:57.488651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.593 [2024-11-18 20:36:57.488679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.593 [2024-11-18 20:36:57.493823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.593 [2024-11-18 20:36:57.493909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.593 [2024-11-18 20:36:57.493938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:35:45.593 [2024-11-18 20:36:57.499081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.593 [2024-11-18 20:36:57.499178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.593 [2024-11-18 20:36:57.499205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.593 [2024-11-18 20:36:57.504355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.593 [2024-11-18 20:36:57.504450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.593 [2024-11-18 20:36:57.504478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.593 [2024-11-18 20:36:57.509540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.593 [2024-11-18 20:36:57.509625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.593 [2024-11-18 20:36:57.509677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.593 [2024-11-18 20:36:57.514645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.593 [2024-11-18 20:36:57.514731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.593 [2024-11-18 20:36:57.514774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.593 [2024-11-18 20:36:57.519772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.593 [2024-11-18 20:36:57.519882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.593 [2024-11-18 20:36:57.519911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.593 [2024-11-18 20:36:57.525115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.593 [2024-11-18 20:36:57.525190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.593 [2024-11-18 20:36:57.525218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.593 [2024-11-18 20:36:57.531074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.593 [2024-11-18 20:36:57.531151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.593 [2024-11-18 20:36:57.531192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.593 [2024-11-18 20:36:57.536684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.593 [2024-11-18 20:36:57.536766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.593 [2024-11-18 20:36:57.536794] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.593 [2024-11-18 20:36:57.542593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.593 [2024-11-18 20:36:57.542672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.593 [2024-11-18 20:36:57.542700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.593 [2024-11-18 20:36:57.548825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.593 [2024-11-18 20:36:57.548899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.593 [2024-11-18 20:36:57.548930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.593 [2024-11-18 20:36:57.554657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.593 [2024-11-18 20:36:57.554739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.593 [2024-11-18 20:36:57.554777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.593 [2024-11-18 20:36:57.559782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.593 [2024-11-18 20:36:57.559884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:45.593 [2024-11-18 20:36:57.559913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.593 [2024-11-18 20:36:57.564903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.593 [2024-11-18 20:36:57.565001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.593 [2024-11-18 20:36:57.565030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.593 [2024-11-18 20:36:57.570587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.593 [2024-11-18 20:36:57.570717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.593 [2024-11-18 20:36:57.570746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.593 [2024-11-18 20:36:57.576955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.593 [2024-11-18 20:36:57.577173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.593 [2024-11-18 20:36:57.577201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.593 [2024-11-18 20:36:57.583941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.593 [2024-11-18 20:36:57.584075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.593 [2024-11-18 20:36:57.584101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.593 [2024-11-18 20:36:57.590315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.593 [2024-11-18 20:36:57.590409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.593 [2024-11-18 20:36:57.590435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.593 [2024-11-18 20:36:57.596564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.593 [2024-11-18 20:36:57.596661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.593 [2024-11-18 20:36:57.596689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.852 [2024-11-18 20:36:57.603302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.852 [2024-11-18 20:36:57.603376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.852 [2024-11-18 20:36:57.603403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.852 [2024-11-18 20:36:57.609438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.852 [2024-11-18 20:36:57.609618] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.852 [2024-11-18 20:36:57.609680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.852 [2024-11-18 20:36:57.615995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.852 [2024-11-18 20:36:57.616167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.852 [2024-11-18 20:36:57.616197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.852 [2024-11-18 20:36:57.622345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.852 [2024-11-18 20:36:57.622530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.852 [2024-11-18 20:36:57.622559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.852 [2024-11-18 20:36:57.628696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.852 [2024-11-18 20:36:57.628886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.852 [2024-11-18 20:36:57.628915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.852 [2024-11-18 20:36:57.634422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 
00:35:45.852 [2024-11-18 20:36:57.634549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.852 [2024-11-18 20:36:57.634578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.852 [2024-11-18 20:36:57.640261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.852 [2024-11-18 20:36:57.640428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.852 [2024-11-18 20:36:57.640457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.852 5449.00 IOPS, 681.12 MiB/s [2024-11-18T19:36:57.860Z] [2024-11-18 20:36:57.647232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c357a0) with pdu=0x2000166ff3c8 00:35:45.852 [2024-11-18 20:36:57.647330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.852 [2024-11-18 20:36:57.647358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.852 00:35:45.852 Latency(us) 00:35:45.852 [2024-11-18T19:36:57.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:45.852 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:45.852 nvme0n1 : 2.00 5446.78 680.85 0.00 0.00 2929.64 2075.31 10145.94 00:35:45.852 [2024-11-18T19:36:57.860Z] =================================================================================================================== 00:35:45.852 [2024-11-18T19:36:57.860Z] Total : 5446.78 680.85 0.00 0.00 2929.64 
2075.31 10145.94 00:35:45.852 { 00:35:45.852 "results": [ 00:35:45.852 { 00:35:45.852 "job": "nvme0n1", 00:35:45.852 "core_mask": "0x2", 00:35:45.852 "workload": "randwrite", 00:35:45.852 "status": "finished", 00:35:45.852 "queue_depth": 16, 00:35:45.852 "io_size": 131072, 00:35:45.852 "runtime": 2.00467, 00:35:45.852 "iops": 5446.781764579707, 00:35:45.852 "mibps": 680.8477205724633, 00:35:45.852 "io_failed": 0, 00:35:45.852 "io_timeout": 0, 00:35:45.852 "avg_latency_us": 2929.6384601764507, 00:35:45.852 "min_latency_us": 2075.306666666667, 00:35:45.852 "max_latency_us": 10145.943703703704 00:35:45.852 } 00:35:45.852 ], 00:35:45.852 "core_count": 1 00:35:45.852 } 00:35:45.852 20:36:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:45.852 20:36:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:45.852 20:36:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:45.853 | .driver_specific 00:35:45.853 | .nvme_error 00:35:45.853 | .status_code 00:35:45.853 | .command_transient_transport_error' 00:35:45.853 20:36:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:46.111 20:36:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 353 > 0 )) 00:35:46.111 20:36:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 398003 00:35:46.111 20:36:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 398003 ']' 00:35:46.111 20:36:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 398003 00:35:46.111 20:36:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 
00:35:46.111 20:36:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:46.111 20:36:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 398003 00:35:46.111 20:36:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:46.111 20:36:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:46.111 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 398003' 00:35:46.111 killing process with pid 398003 00:35:46.111 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 398003 00:35:46.111 Received shutdown signal, test time was about 2.000000 seconds 00:35:46.111 00:35:46.111 Latency(us) 00:35:46.111 [2024-11-18T19:36:58.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:46.111 [2024-11-18T19:36:58.119Z] =================================================================================================================== 00:35:46.111 [2024-11-18T19:36:58.119Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:46.111 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 398003 00:35:46.369 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 396531 00:35:46.369 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 396531 ']' 00:35:46.369 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 396531 00:35:46.369 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:46.369 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:46.369 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 396531 00:35:46.369 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:46.369 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:46.369 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 396531' 00:35:46.369 killing process with pid 396531 00:35:46.369 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 396531 00:35:46.369 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 396531 00:35:46.629 00:35:46.629 real 0m15.422s 00:35:46.629 user 0m31.069s 00:35:46.629 sys 0m4.188s 00:35:46.629 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:46.629 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:46.629 ************************************ 00:35:46.629 END TEST nvmf_digest_error 00:35:46.629 ************************************ 00:35:46.629 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:46.629 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:46.629 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:46.629 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:35:46.629 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:46.629 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:35:46.629 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:35:46.629 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:46.629 rmmod nvme_tcp 00:35:46.629 rmmod nvme_fabrics 00:35:46.629 rmmod nvme_keyring 00:35:46.629 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:46.629 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:35:46.629 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:35:46.629 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 396531 ']' 00:35:46.629 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 396531 00:35:46.629 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 396531 ']' 00:35:46.629 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 396531 00:35:46.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (396531) - No such process 00:35:46.629 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 396531 is not found' 00:35:46.629 Process with pid 396531 is not found 00:35:46.629 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:46.629 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:46.629 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:46.629 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:35:46.629 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:35:46.629 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:46.629 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:35:46.629 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- 
# [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:46.629 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:46.629 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:46.629 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:46.629 20:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:49.166 00:35:49.166 real 0m35.489s 00:35:49.166 user 1m3.418s 00:35:49.166 sys 0m9.937s 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:49.166 ************************************ 00:35:49.166 END TEST nvmf_digest 00:35:49.166 ************************************ 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.166 ************************************ 00:35:49.166 START TEST nvmf_bdevperf 00:35:49.166 ************************************ 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:49.166 * Looking for test storage... 00:35:49.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:35:49.166 20:37:00 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:49.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.166 --rc genhtml_branch_coverage=1 00:35:49.166 --rc genhtml_function_coverage=1 00:35:49.166 --rc genhtml_legend=1 00:35:49.166 --rc geninfo_all_blocks=1 
00:35:49.166 --rc geninfo_unexecuted_blocks=1 00:35:49.166 00:35:49.166 ' 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:49.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.166 --rc genhtml_branch_coverage=1 00:35:49.166 --rc genhtml_function_coverage=1 00:35:49.166 --rc genhtml_legend=1 00:35:49.166 --rc geninfo_all_blocks=1 00:35:49.166 --rc geninfo_unexecuted_blocks=1 00:35:49.166 00:35:49.166 ' 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:49.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.166 --rc genhtml_branch_coverage=1 00:35:49.166 --rc genhtml_function_coverage=1 00:35:49.166 --rc genhtml_legend=1 00:35:49.166 --rc geninfo_all_blocks=1 00:35:49.166 --rc geninfo_unexecuted_blocks=1 00:35:49.166 00:35:49.166 ' 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:49.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.166 --rc genhtml_branch_coverage=1 00:35:49.166 --rc genhtml_function_coverage=1 00:35:49.166 --rc genhtml_legend=1 00:35:49.166 --rc geninfo_all_blocks=1 00:35:49.166 --rc geninfo_unexecuted_blocks=1 00:35:49.166 00:35:49.166 ' 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:49.166 20:37:00 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.166 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:35:49.167 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.167 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:35:49.167 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:49.167 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:49.167 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:49.167 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:49.167 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:35:49.167 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:49.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:49.167 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:49.167 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:49.167 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:49.167 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:49.167 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:49.167 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:49.167 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:49.167 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:49.167 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:49.167 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:49.167 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:49.167 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:49.167 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:49.167 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:49.167 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:49.167 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:49.167 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
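The log above records a real shell error from nvmf/common.sh line 33 — `[: : integer expression expected` — caused by handing `[` an empty string where `-eq` expects an integer (`'[' '' -eq 1 ']'`). A minimal sketch of the failure and the usual guard; `FLAG` here is a hypothetical stand-in for the script's own variable, not a name from common.sh:

```shell
# FLAG is a hypothetical stand-in for the empty variable the log trips on.
FLAG=""   # empty, as in the logged invocation

# Failing form (what the log shows): an empty operand to -eq makes `[`
# print "[: : integer expression expected" on stderr and return nonzero.
if [ "$FLAG" -eq 1 ] 2>/dev/null; then echo "huge pages off"; fi

# Defensive form: substitute 0 when the variable is empty or unset,
# so -eq always sees an integer.
if [ "${FLAG:-0}" -eq 1 ]; then
    echo "huge pages off"
else
    echo "flag not set"
fi
```

The test still proceeds because the failing `[` only returns nonzero; the message is noise rather than a fatal error.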
nvmf/common.sh@309 -- # xtrace_disable 00:35:49.167 20:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:51.073 Found 
0000:0a:00.0 (0x8086 - 0x159b) 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:51.073 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:51.073 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:51.073 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@442 -- # is_hw=yes 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:51.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:51.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:35:51.073 00:35:51.073 --- 10.0.0.2 ping statistics --- 00:35:51.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:51.073 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:35:51.073 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:51.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
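The nvmf_tcp_init sequence recorded above (flush addresses, move the target NIC into a namespace, assign 10.0.0.1/24 and 10.0.0.2/24, bring links up, open port 4420, ping both directions) can be sketched as a standalone script. Interface names, namespace name, addresses, and the iptables rule are taken from the log; the `run` guard is an addition of this sketch so the commands print instead of execute unless `RUN=1` (executing for real requires root and the discovered NICs):

```shell
# Dry-run wrapper: with RUN unset the commands are printed, not executed.
run() { if [ "${RUN:-0}" = 1 ]; then "$@"; else echo "$@"; fi; }

TARGET_IF=cvl_0_0       # moved into the namespace, gets 10.0.0.2
INITIATOR_IF=cvl_0_1    # stays in the root namespace, gets 10.0.0.1
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"

run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port toward the initiator-side interface.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Verify both directions, as the log does.
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

The namespace isolates the target's NIC from the initiator's, so NVMe/TCP traffic between 10.0.0.1 and 10.0.0.2 really crosses the physical link rather than the loopback path.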
00:35:51.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:35:51.073 00:35:51.074 --- 10.0.0.1 ping statistics --- 00:35:51.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:51.074 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:35:51.074 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:51.074 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:35:51.074 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:51.074 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:51.074 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:51.074 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:51.074 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:51.074 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:51.074 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:51.074 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:51.074 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:51.074 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:51.074 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:51.074 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:51.074 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=400365 00:35:51.074 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:51.074 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 400365 00:35:51.074 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 400365 ']' 00:35:51.074 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:51.074 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:51.074 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:51.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:51.074 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:51.074 20:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:51.074 [2024-11-18 20:37:02.970202] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:35:51.074 [2024-11-18 20:37:02.970280] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:51.074 [2024-11-18 20:37:03.040714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:51.332 [2024-11-18 20:37:03.087030] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:51.332 [2024-11-18 20:37:03.087076] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:51.332 [2024-11-18 20:37:03.087099] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:51.332 [2024-11-18 20:37:03.087110] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:51.332 [2024-11-18 20:37:03.087119] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:51.332 [2024-11-18 20:37:03.088539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:51.332 [2024-11-18 20:37:03.088606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:51.332 [2024-11-18 20:37:03.088609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:51.332 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:51.332 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:51.332 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:51.332 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:51.332 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:51.332 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:51.332 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:51.332 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.332 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:51.332 [2024-11-18 20:37:03.229691] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:51.332 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.332 20:37:03 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:51.332 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.332 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:51.332 Malloc0 00:35:51.332 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.332 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:51.332 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.332 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:51.332 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.332 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:51.332 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.332 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:51.332 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.333 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:51.333 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.333 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:51.333 [2024-11-18 20:37:03.289649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:51.333 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:35:51.333 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:51.333 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:51.333 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:51.333 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:51.333 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:51.333 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:51.333 { 00:35:51.333 "params": { 00:35:51.333 "name": "Nvme$subsystem", 00:35:51.333 "trtype": "$TEST_TRANSPORT", 00:35:51.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:51.333 "adrfam": "ipv4", 00:35:51.333 "trsvcid": "$NVMF_PORT", 00:35:51.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:51.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:51.333 "hdgst": ${hdgst:-false}, 00:35:51.333 "ddgst": ${ddgst:-false} 00:35:51.333 }, 00:35:51.333 "method": "bdev_nvme_attach_controller" 00:35:51.333 } 00:35:51.333 EOF 00:35:51.333 )") 00:35:51.333 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:51.333 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:35:51.333 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:51.333 20:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:51.333 "params": { 00:35:51.333 "name": "Nvme1", 00:35:51.333 "trtype": "tcp", 00:35:51.333 "traddr": "10.0.0.2", 00:35:51.333 "adrfam": "ipv4", 00:35:51.333 "trsvcid": "4420", 00:35:51.333 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:51.333 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:51.333 "hdgst": false, 00:35:51.333 "ddgst": false 00:35:51.333 }, 00:35:51.333 "method": "bdev_nvme_attach_controller" 00:35:51.333 }' 00:35:51.333 [2024-11-18 20:37:03.338611] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:35:51.333 [2024-11-18 20:37:03.338721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid400388 ] 00:35:51.592 [2024-11-18 20:37:03.408423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:51.592 [2024-11-18 20:37:03.455733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:51.850 Running I/O for 1 seconds... 
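The gen_nvmf_target_json step above builds each controller description from a here-doc with the test variables substituted, then prints it for bdevperf to read over a substituted file descriptor (`--json /dev/fd/62` in the log). A sketch of just that fragment, using the values the log substituted; how the helper assembles multiple subsystems and the full wrapper passed to bdevperf is not reproduced here:

```shell
# Values taken from the logged run.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1

# Unquoted here-doc delimiter so the $variables expand, matching the
# printed config in the log.
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config"
```

Passing the config via a file descriptor rather than a temp file keeps the credentials and addresses out of the filesystem for the duration of the run.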
00:35:53.227 8359.00 IOPS, 32.65 MiB/s 00:35:53.227 Latency(us) 00:35:53.227 [2024-11-18T19:37:05.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:53.227 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:53.227 Verification LBA range: start 0x0 length 0x4000 00:35:53.227 Nvme1n1 : 1.01 8412.23 32.86 0.00 0.00 15146.13 743.35 14369.37 00:35:53.227 [2024-11-18T19:37:05.235Z] =================================================================================================================== 00:35:53.227 [2024-11-18T19:37:05.235Z] Total : 8412.23 32.86 0.00 0.00 15146.13 743.35 14369.37 00:35:53.227 20:37:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=400653 00:35:53.227 20:37:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:53.227 20:37:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:53.227 20:37:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:53.227 20:37:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:53.227 20:37:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:53.227 20:37:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:53.227 20:37:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:53.227 { 00:35:53.227 "params": { 00:35:53.227 "name": "Nvme$subsystem", 00:35:53.227 "trtype": "$TEST_TRANSPORT", 00:35:53.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:53.227 "adrfam": "ipv4", 00:35:53.227 "trsvcid": "$NVMF_PORT", 00:35:53.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:53.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:53.227 "hdgst": ${hdgst:-false}, 00:35:53.227 "ddgst": 
${ddgst:-false} 00:35:53.227 }, 00:35:53.227 "method": "bdev_nvme_attach_controller" 00:35:53.227 } 00:35:53.227 EOF 00:35:53.227 )") 00:35:53.227 20:37:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:53.227 20:37:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:35:53.227 20:37:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:53.227 20:37:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:53.227 "params": { 00:35:53.227 "name": "Nvme1", 00:35:53.227 "trtype": "tcp", 00:35:53.227 "traddr": "10.0.0.2", 00:35:53.227 "adrfam": "ipv4", 00:35:53.227 "trsvcid": "4420", 00:35:53.227 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:53.227 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:53.227 "hdgst": false, 00:35:53.227 "ddgst": false 00:35:53.227 }, 00:35:53.227 "method": "bdev_nvme_attach_controller" 00:35:53.227 }' 00:35:53.227 [2024-11-18 20:37:05.048570] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:35:53.227 [2024-11-18 20:37:05.048702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid400653 ] 00:35:53.227 [2024-11-18 20:37:05.115895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:53.227 [2024-11-18 20:37:05.160549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:53.486 Running I/O for 15 seconds... 
00:35:55.358 8418.00 IOPS, 32.88 MiB/s
[2024-11-18T19:37:08.304Z] 8505.00 IOPS, 33.22 MiB/s
[2024-11-18T19:37:08.304Z] 20:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 400365
00:35:56.296 20:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:35:56.296 [2024-11-18 20:37:08.016229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:56.296 [2024-11-18 20:37:08.016278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:56.296 [2024-11-18 20:37:08.016316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:56.296 [2024-11-18 20:37:08.016334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:56.296 [2024-11-18 20:37:08.016351 through 20:37:08.020122] nvme_qpair.c: 243/474: identical nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs repeat for every remaining in-flight I/O on qid:1 (WRITEs lba:51584 to 51808 and READs lba:50808 to 51456, len:8 each), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.299 [2024-11-18 20:37:08.020136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.299 [2024-11-18 20:37:08.020149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.299 [2024-11-18 20:37:08.020178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.299 [2024-11-18 20:37:08.020191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.299 [2024-11-18 20:37:08.020210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.299 [2024-11-18 20:37:08.020224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.299 [2024-11-18 20:37:08.020239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.299 [2024-11-18 20:37:08.020252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.299 [2024-11-18 20:37:08.020267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.299 [2024-11-18 20:37:08.020281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.299 [2024-11-18 20:37:08.020295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:51816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:56.300 [2024-11-18 20:37:08.020308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.300 [2024-11-18 20:37:08.020329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:56.300 [2024-11-18 20:37:08.020342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.300 [2024-11-18 20:37:08.020357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.300 [2024-11-18 20:37:08.020370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.300 [2024-11-18 20:37:08.020385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:51512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.300 [2024-11-18 20:37:08.020398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.300 [2024-11-18 20:37:08.020413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:51520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.300 [2024-11-18 20:37:08.020426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.300 [2024-11-18 20:37:08.020440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:51528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.300 [2024-11-18 20:37:08.020454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.300 
[2024-11-18 20:37:08.020468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.300 [2024-11-18 20:37:08.020481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.300 [2024-11-18 20:37:08.020496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:51544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.300 [2024-11-18 20:37:08.020509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.300 [2024-11-18 20:37:08.020523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:51552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.300 [2024-11-18 20:37:08.020536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.300 [2024-11-18 20:37:08.020551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbaf20 is same with the state(6) to be set 00:35:56.300 [2024-11-18 20:37:08.020568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:56.300 [2024-11-18 20:37:08.020580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:56.300 [2024-11-18 20:37:08.020591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51560 len:8 PRP1 0x0 PRP2 0x0 00:35:56.300 [2024-11-18 20:37:08.020604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.300 [2024-11-18 20:37:08.020761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:56.300 [2024-11-18 
20:37:08.020785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.300 [2024-11-18 20:37:08.020801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:56.300 [2024-11-18 20:37:08.020821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.300 [2024-11-18 20:37:08.020836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:56.300 [2024-11-18 20:37:08.020855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.300 [2024-11-18 20:37:08.020870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:56.300 [2024-11-18 20:37:08.020884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.300 [2024-11-18 20:37:08.020897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.300 [2024-11-18 20:37:08.024298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.300 [2024-11-18 20:37:08.024335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.300 [2024-11-18 20:37:08.024924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.300 [2024-11-18 20:37:08.024960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.300 [2024-11-18 20:37:08.024979] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.300 [2024-11-18 20:37:08.025242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.300 [2024-11-18 20:37:08.025457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.300 [2024-11-18 20:37:08.025478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.300 [2024-11-18 20:37:08.025495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.300 [2024-11-18 20:37:08.025511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:56.300 [2024-11-18 20:37:08.038265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.300 [2024-11-18 20:37:08.038669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.300 [2024-11-18 20:37:08.038701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.300 [2024-11-18 20:37:08.038719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.300 [2024-11-18 20:37:08.038950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.300 [2024-11-18 20:37:08.039180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.300 [2024-11-18 20:37:08.039200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.300 [2024-11-18 
20:37:08.039214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.300 [2024-11-18 20:37:08.039242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:56.300 [2024-11-18 20:37:08.051965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.300 [2024-11-18 20:37:08.052386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.300 [2024-11-18 20:37:08.052415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.300 [2024-11-18 20:37:08.052432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.300 [2024-11-18 20:37:08.052686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.300 [2024-11-18 20:37:08.052913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.300 [2024-11-18 20:37:08.052937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.300 [2024-11-18 20:37:08.052967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.300 [2024-11-18 20:37:08.052981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.300 [2024-11-18 20:37:08.065433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.300 [2024-11-18 20:37:08.065791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.300 [2024-11-18 20:37:08.065831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.300 [2024-11-18 20:37:08.065849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.300 [2024-11-18 20:37:08.066090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.300 [2024-11-18 20:37:08.066319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.300 [2024-11-18 20:37:08.066339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.301 [2024-11-18 20:37:08.066352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.301 [2024-11-18 20:37:08.066365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.301 [2024-11-18 20:37:08.079066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.301 [2024-11-18 20:37:08.079490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.301 [2024-11-18 20:37:08.079520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.301 [2024-11-18 20:37:08.079537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.301 [2024-11-18 20:37:08.079764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.301 [2024-11-18 20:37:08.080009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.301 [2024-11-18 20:37:08.080029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.301 [2024-11-18 20:37:08.080043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.301 [2024-11-18 20:37:08.080056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.301 [2024-11-18 20:37:08.092375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.301 [2024-11-18 20:37:08.092770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.301 [2024-11-18 20:37:08.092810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.301 [2024-11-18 20:37:08.092826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.301 [2024-11-18 20:37:08.093062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.301 [2024-11-18 20:37:08.093266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.301 [2024-11-18 20:37:08.093285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.301 [2024-11-18 20:37:08.093297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.301 [2024-11-18 20:37:08.093314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.301 [2024-11-18 20:37:08.105702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.301 [2024-11-18 20:37:08.106120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.301 [2024-11-18 20:37:08.106149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.301 [2024-11-18 20:37:08.106171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.301 [2024-11-18 20:37:08.106407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.301 [2024-11-18 20:37:08.106609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.301 [2024-11-18 20:37:08.106653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.301 [2024-11-18 20:37:08.106670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.301 [2024-11-18 20:37:08.106685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.301 [2024-11-18 20:37:08.118971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.301 [2024-11-18 20:37:08.119405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.301 [2024-11-18 20:37:08.119433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.301 [2024-11-18 20:37:08.119450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.301 [2024-11-18 20:37:08.119701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.301 [2024-11-18 20:37:08.119913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.301 [2024-11-18 20:37:08.119934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.301 [2024-11-18 20:37:08.119948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.301 [2024-11-18 20:37:08.119976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.301 [2024-11-18 20:37:08.132119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.301 [2024-11-18 20:37:08.132475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.301 [2024-11-18 20:37:08.132503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.301 [2024-11-18 20:37:08.132521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.301 [2024-11-18 20:37:08.132775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.301 [2024-11-18 20:37:08.133004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.301 [2024-11-18 20:37:08.133024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.301 [2024-11-18 20:37:08.133037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.301 [2024-11-18 20:37:08.133049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.301 [2024-11-18 20:37:08.145423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.301 [2024-11-18 20:37:08.145807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.301 [2024-11-18 20:37:08.145847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.301 [2024-11-18 20:37:08.145864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.301 [2024-11-18 20:37:08.146105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.301 [2024-11-18 20:37:08.146316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.301 [2024-11-18 20:37:08.146336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.301 [2024-11-18 20:37:08.146349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.301 [2024-11-18 20:37:08.146362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.301 [2024-11-18 20:37:08.158410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.301 [2024-11-18 20:37:08.158730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.301 [2024-11-18 20:37:08.158759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.301 [2024-11-18 20:37:08.158775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.301 [2024-11-18 20:37:08.158992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.301 [2024-11-18 20:37:08.159196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.301 [2024-11-18 20:37:08.159216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.301 [2024-11-18 20:37:08.159229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.301 [2024-11-18 20:37:08.159240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.301 [2024-11-18 20:37:08.171451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.301 [2024-11-18 20:37:08.171871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.301 [2024-11-18 20:37:08.171900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.301 [2024-11-18 20:37:08.171923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.301 [2024-11-18 20:37:08.172161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.301 [2024-11-18 20:37:08.172364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.301 [2024-11-18 20:37:08.172383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.302 [2024-11-18 20:37:08.172396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.302 [2024-11-18 20:37:08.172409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.302 [2024-11-18 20:37:08.184591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.302 [2024-11-18 20:37:08.184978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.302 [2024-11-18 20:37:08.185009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.302 [2024-11-18 20:37:08.185025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.302 [2024-11-18 20:37:08.185239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.302 [2024-11-18 20:37:08.185442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.302 [2024-11-18 20:37:08.185461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.302 [2024-11-18 20:37:08.185473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.302 [2024-11-18 20:37:08.185485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.302 [2024-11-18 20:37:08.197580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.302 [2024-11-18 20:37:08.197951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.302 [2024-11-18 20:37:08.198005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.302 [2024-11-18 20:37:08.198022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.302 [2024-11-18 20:37:08.198254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.302 [2024-11-18 20:37:08.198457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.302 [2024-11-18 20:37:08.198475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.302 [2024-11-18 20:37:08.198488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.302 [2024-11-18 20:37:08.198499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.302 [2024-11-18 20:37:08.210608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.302 [2024-11-18 20:37:08.210955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.302 [2024-11-18 20:37:08.210983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.302 [2024-11-18 20:37:08.211000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.302 [2024-11-18 20:37:08.211235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.302 [2024-11-18 20:37:08.211439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.302 [2024-11-18 20:37:08.211458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.302 [2024-11-18 20:37:08.211471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.302 [2024-11-18 20:37:08.211483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.302 [2024-11-18 20:37:08.223563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:56.302 [2024-11-18 20:37:08.223934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.302 [2024-11-18 20:37:08.223977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:56.302 [2024-11-18 20:37:08.223993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:56.302 [2024-11-18 20:37:08.224226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:56.302 [2024-11-18 20:37:08.224428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:56.302 [2024-11-18 20:37:08.224452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:56.302 [2024-11-18 20:37:08.224466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:56.302 [2024-11-18 20:37:08.224478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:56.302 [2024-11-18 20:37:08.236543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:56.302 [2024-11-18 20:37:08.236987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.302 [2024-11-18 20:37:08.237015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:56.302 [2024-11-18 20:37:08.237032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:56.302 [2024-11-18 20:37:08.237267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:56.302 [2024-11-18 20:37:08.237471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:56.302 [2024-11-18 20:37:08.237490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:56.302 [2024-11-18 20:37:08.237504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:56.302 [2024-11-18 20:37:08.237516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:56.302 [2024-11-18 20:37:08.249652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:56.302 [2024-11-18 20:37:08.250069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.302 [2024-11-18 20:37:08.250097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:56.302 [2024-11-18 20:37:08.250119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:56.302 [2024-11-18 20:37:08.250355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:56.302 [2024-11-18 20:37:08.250558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:56.302 [2024-11-18 20:37:08.250577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:56.302 [2024-11-18 20:37:08.250590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:56.302 [2024-11-18 20:37:08.250602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:56.302 [2024-11-18 20:37:08.262858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:56.302 [2024-11-18 20:37:08.263286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.302 [2024-11-18 20:37:08.263314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:56.302 [2024-11-18 20:37:08.263332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:56.302 [2024-11-18 20:37:08.263567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:56.302 [2024-11-18 20:37:08.263803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:56.302 [2024-11-18 20:37:08.263824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:56.302 [2024-11-18 20:37:08.263838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:56.302 [2024-11-18 20:37:08.263855] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:56.302 [2024-11-18 20:37:08.276058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:56.302 [2024-11-18 20:37:08.276445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.302 [2024-11-18 20:37:08.276474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:56.302 [2024-11-18 20:37:08.276491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:56.302 [2024-11-18 20:37:08.276737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:56.302 [2024-11-18 20:37:08.276948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:56.302 [2024-11-18 20:37:08.276978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:56.302 [2024-11-18 20:37:08.276992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:56.302 [2024-11-18 20:37:08.277005] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:56.303 [2024-11-18 20:37:08.289325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:56.303 [2024-11-18 20:37:08.289672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.303 [2024-11-18 20:37:08.289703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:56.303 [2024-11-18 20:37:08.289721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:56.303 [2024-11-18 20:37:08.289957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:56.303 [2024-11-18 20:37:08.290155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:56.303 [2024-11-18 20:37:08.290175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:56.303 [2024-11-18 20:37:08.290188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:56.303 [2024-11-18 20:37:08.290201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:56.562 [2024-11-18 20:37:08.302990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:56.562 [2024-11-18 20:37:08.303356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.562 [2024-11-18 20:37:08.303408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:56.562 [2024-11-18 20:37:08.303427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:56.562 [2024-11-18 20:37:08.303685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:56.562 [2024-11-18 20:37:08.303887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:56.562 [2024-11-18 20:37:08.303909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:56.562 [2024-11-18 20:37:08.303924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:56.562 [2024-11-18 20:37:08.303937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:56.562 [2024-11-18 20:37:08.316275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:56.562 [2024-11-18 20:37:08.316611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.562 [2024-11-18 20:37:08.316656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:56.562 [2024-11-18 20:37:08.316679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:56.562 [2024-11-18 20:37:08.316915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:56.562 [2024-11-18 20:37:08.317110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:56.562 [2024-11-18 20:37:08.317130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:56.562 [2024-11-18 20:37:08.317144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:56.562 [2024-11-18 20:37:08.317157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:56.562 [2024-11-18 20:37:08.329434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.562 [2024-11-18 20:37:08.329823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.562 [2024-11-18 20:37:08.329853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.562 [2024-11-18 20:37:08.329870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.562 [2024-11-18 20:37:08.330094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.562 [2024-11-18 20:37:08.330302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.562 [2024-11-18 20:37:08.330323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.562 [2024-11-18 20:37:08.330337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.562 [2024-11-18 20:37:08.330350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.562 [2024-11-18 20:37:08.342551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:56.562 [2024-11-18 20:37:08.342885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.562 [2024-11-18 20:37:08.342914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:56.562 [2024-11-18 20:37:08.342930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:56.563 [2024-11-18 20:37:08.343127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:56.563 [2024-11-18 20:37:08.343350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:56.563 [2024-11-18 20:37:08.343370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:56.563 [2024-11-18 20:37:08.343383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:56.563 [2024-11-18 20:37:08.343395] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:56.563 7578.33 IOPS, 29.60 MiB/s [2024-11-18T19:37:08.571Z] [2024-11-18 20:37:08.355795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:56.563 [2024-11-18 20:37:08.356206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.563 [2024-11-18 20:37:08.356236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:56.563 [2024-11-18 20:37:08.356253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:56.563 [2024-11-18 20:37:08.356493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:56.563 [2024-11-18 20:37:08.356750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:56.563 [2024-11-18 20:37:08.356773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:56.563 [2024-11-18 20:37:08.356788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:56.563 [2024-11-18 20:37:08.356802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:56.563 [2024-11-18 20:37:08.368880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:56.563 [2024-11-18 20:37:08.369288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.563 [2024-11-18 20:37:08.369317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:56.563 [2024-11-18 20:37:08.369334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:56.563 [2024-11-18 20:37:08.369572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:56.563 [2024-11-18 20:37:08.369812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:56.563 [2024-11-18 20:37:08.369834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:56.563 [2024-11-18 20:37:08.369848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:56.563 [2024-11-18 20:37:08.369862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:56.563 [2024-11-18 20:37:08.381973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:56.563 [2024-11-18 20:37:08.382379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.563 [2024-11-18 20:37:08.382408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:56.563 [2024-11-18 20:37:08.382425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:56.563 [2024-11-18 20:37:08.382672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:56.563 [2024-11-18 20:37:08.382885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:56.563 [2024-11-18 20:37:08.382906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:56.563 [2024-11-18 20:37:08.382920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:56.563 [2024-11-18 20:37:08.382933] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:56.563 [2024-11-18 20:37:08.395027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:56.563 [2024-11-18 20:37:08.395436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.563 [2024-11-18 20:37:08.395465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:56.563 [2024-11-18 20:37:08.395482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:56.563 [2024-11-18 20:37:08.395715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:56.563 [2024-11-18 20:37:08.395921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:56.563 [2024-11-18 20:37:08.395961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:56.563 [2024-11-18 20:37:08.395976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:56.563 [2024-11-18 20:37:08.395988] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:56.563 [2024-11-18 20:37:08.408204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:56.563 [2024-11-18 20:37:08.408548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.563 [2024-11-18 20:37:08.408576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:56.563 [2024-11-18 20:37:08.408593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:56.563 [2024-11-18 20:37:08.408859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:56.563 [2024-11-18 20:37:08.409080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:56.563 [2024-11-18 20:37:08.409101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:56.563 [2024-11-18 20:37:08.409114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:56.563 [2024-11-18 20:37:08.409126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:56.563 [2024-11-18 20:37:08.421213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:56.563 [2024-11-18 20:37:08.421555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.563 [2024-11-18 20:37:08.421582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:56.563 [2024-11-18 20:37:08.421599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:56.563 [2024-11-18 20:37:08.421845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:56.563 [2024-11-18 20:37:08.422066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:56.563 [2024-11-18 20:37:08.422086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:56.563 [2024-11-18 20:37:08.422100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:56.563 [2024-11-18 20:37:08.422113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:56.563 [2024-11-18 20:37:08.434314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:56.563 [2024-11-18 20:37:08.434659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.563 [2024-11-18 20:37:08.434689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:56.563 [2024-11-18 20:37:08.434706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:56.563 [2024-11-18 20:37:08.434943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:56.563 [2024-11-18 20:37:08.435146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:56.563 [2024-11-18 20:37:08.435166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:56.563 [2024-11-18 20:37:08.435179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:56.563 [2024-11-18 20:37:08.435196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:56.563 [2024-11-18 20:37:08.447388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:56.563 [2024-11-18 20:37:08.447744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.563 [2024-11-18 20:37:08.447774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:56.563 [2024-11-18 20:37:08.447791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:56.563 [2024-11-18 20:37:08.448026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:56.563 [2024-11-18 20:37:08.448228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:56.563 [2024-11-18 20:37:08.448248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:56.563 [2024-11-18 20:37:08.448262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:56.563 [2024-11-18 20:37:08.448274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:56.563 [2024-11-18 20:37:08.460452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:56.564 [2024-11-18 20:37:08.460801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.564 [2024-11-18 20:37:08.460830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:56.564 [2024-11-18 20:37:08.460847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:56.564 [2024-11-18 20:37:08.461082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:56.564 [2024-11-18 20:37:08.461286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:56.564 [2024-11-18 20:37:08.461306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:56.564 [2024-11-18 20:37:08.461320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:56.564 [2024-11-18 20:37:08.461332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:56.564 [2024-11-18 20:37:08.473481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:56.564 [2024-11-18 20:37:08.473833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.564 [2024-11-18 20:37:08.473862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:56.564 [2024-11-18 20:37:08.473879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:56.564 [2024-11-18 20:37:08.474119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:56.564 [2024-11-18 20:37:08.474321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:56.564 [2024-11-18 20:37:08.474356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:56.564 [2024-11-18 20:37:08.474369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:56.564 [2024-11-18 20:37:08.474382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:56.564 [2024-11-18 20:37:08.486620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:56.564 [2024-11-18 20:37:08.486972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.564 [2024-11-18 20:37:08.487001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:56.564 [2024-11-18 20:37:08.487017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:56.564 [2024-11-18 20:37:08.487235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:56.564 [2024-11-18 20:37:08.487437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:56.564 [2024-11-18 20:37:08.487457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:56.564 [2024-11-18 20:37:08.487471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:56.564 [2024-11-18 20:37:08.487483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:56.564 [2024-11-18 20:37:08.499774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:56.564 [2024-11-18 20:37:08.500116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.564 [2024-11-18 20:37:08.500144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:56.564 [2024-11-18 20:37:08.500161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:56.564 [2024-11-18 20:37:08.500391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:56.564 [2024-11-18 20:37:08.500581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:56.564 [2024-11-18 20:37:08.500601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:56.564 [2024-11-18 20:37:08.500628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:56.564 [2024-11-18 20:37:08.500654] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:56.564 [2024-11-18 20:37:08.512797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:56.564 [2024-11-18 20:37:08.513203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.564 [2024-11-18 20:37:08.513233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:56.564 [2024-11-18 20:37:08.513250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:56.564 [2024-11-18 20:37:08.513486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:56.564 [2024-11-18 20:37:08.513733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:56.564 [2024-11-18 20:37:08.513755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:56.564 [2024-11-18 20:37:08.513771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:56.564 [2024-11-18 20:37:08.513784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:56.564 [2024-11-18 20:37:08.525879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:56.564 [2024-11-18 20:37:08.526225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.564 [2024-11-18 20:37:08.526254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:56.564 [2024-11-18 20:37:08.526271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:56.564 [2024-11-18 20:37:08.526515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:56.564 [2024-11-18 20:37:08.526761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:56.564 [2024-11-18 20:37:08.526784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:56.564 [2024-11-18 20:37:08.526798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:56.564 [2024-11-18 20:37:08.526811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:56.564 [2024-11-18 20:37:08.539345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.564 [2024-11-18 20:37:08.539755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.564 [2024-11-18 20:37:08.539784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.564 [2024-11-18 20:37:08.539801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.564 [2024-11-18 20:37:08.540037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.564 [2024-11-18 20:37:08.540249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.564 [2024-11-18 20:37:08.540270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.564 [2024-11-18 20:37:08.540284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.564 [2024-11-18 20:37:08.540296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.564 [2024-11-18 20:37:08.552463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.564 [2024-11-18 20:37:08.552841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.564 [2024-11-18 20:37:08.552870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.564 [2024-11-18 20:37:08.552887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.564 [2024-11-18 20:37:08.553123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.564 [2024-11-18 20:37:08.553325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.564 [2024-11-18 20:37:08.553345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.564 [2024-11-18 20:37:08.553358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.564 [2024-11-18 20:37:08.553371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.564 [2024-11-18 20:37:08.565836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.564 [2024-11-18 20:37:08.566262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.564 [2024-11-18 20:37:08.566293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.564 [2024-11-18 20:37:08.566312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.564 [2024-11-18 20:37:08.566568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.564 [2024-11-18 20:37:08.566853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.564 [2024-11-18 20:37:08.566898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.564 [2024-11-18 20:37:08.566915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.564 [2024-11-18 20:37:08.566931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.824 [2024-11-18 20:37:08.579017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.824 [2024-11-18 20:37:08.579428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.824 [2024-11-18 20:37:08.579459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.824 [2024-11-18 20:37:08.579477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.824 [2024-11-18 20:37:08.579729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.824 [2024-11-18 20:37:08.579944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.824 [2024-11-18 20:37:08.579966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.824 [2024-11-18 20:37:08.579993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.824 [2024-11-18 20:37:08.580007] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.824 [2024-11-18 20:37:08.592114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.824 [2024-11-18 20:37:08.592572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.824 [2024-11-18 20:37:08.592624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.824 [2024-11-18 20:37:08.592653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.824 [2024-11-18 20:37:08.592902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.824 [2024-11-18 20:37:08.593104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.824 [2024-11-18 20:37:08.593124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.824 [2024-11-18 20:37:08.593138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.824 [2024-11-18 20:37:08.593151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.824 [2024-11-18 20:37:08.605205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.824 [2024-11-18 20:37:08.605549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.824 [2024-11-18 20:37:08.605578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.824 [2024-11-18 20:37:08.605596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.824 [2024-11-18 20:37:08.605862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.824 [2024-11-18 20:37:08.606091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.824 [2024-11-18 20:37:08.606112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.824 [2024-11-18 20:37:08.606126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.824 [2024-11-18 20:37:08.606143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.824 [2024-11-18 20:37:08.618219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.824 [2024-11-18 20:37:08.618628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.824 [2024-11-18 20:37:08.618679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.824 [2024-11-18 20:37:08.618697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.824 [2024-11-18 20:37:08.618933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.824 [2024-11-18 20:37:08.619135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.824 [2024-11-18 20:37:08.619155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.824 [2024-11-18 20:37:08.619168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.824 [2024-11-18 20:37:08.619180] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.825 [2024-11-18 20:37:08.631251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.825 [2024-11-18 20:37:08.631659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.825 [2024-11-18 20:37:08.631689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.825 [2024-11-18 20:37:08.631707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.825 [2024-11-18 20:37:08.631944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.825 [2024-11-18 20:37:08.632149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.825 [2024-11-18 20:37:08.632169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.825 [2024-11-18 20:37:08.632182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.825 [2024-11-18 20:37:08.632194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.825 [2024-11-18 20:37:08.644307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.825 [2024-11-18 20:37:08.644618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.825 [2024-11-18 20:37:08.644657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.825 [2024-11-18 20:37:08.644690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.825 [2024-11-18 20:37:08.644926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.825 [2024-11-18 20:37:08.645130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.825 [2024-11-18 20:37:08.645150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.825 [2024-11-18 20:37:08.645164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.825 [2024-11-18 20:37:08.645176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.825 [2024-11-18 20:37:08.657488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.825 [2024-11-18 20:37:08.657868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.825 [2024-11-18 20:37:08.657904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.825 [2024-11-18 20:37:08.657922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.825 [2024-11-18 20:37:08.658172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.825 [2024-11-18 20:37:08.658381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.825 [2024-11-18 20:37:08.658401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.825 [2024-11-18 20:37:08.658415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.825 [2024-11-18 20:37:08.658428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.825 [2024-11-18 20:37:08.670631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.825 [2024-11-18 20:37:08.671059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.825 [2024-11-18 20:37:08.671087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.825 [2024-11-18 20:37:08.671104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.825 [2024-11-18 20:37:08.671352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.825 [2024-11-18 20:37:08.671540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.825 [2024-11-18 20:37:08.671560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.825 [2024-11-18 20:37:08.671573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.825 [2024-11-18 20:37:08.671585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.825 [2024-11-18 20:37:08.683831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.825 [2024-11-18 20:37:08.684260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.825 [2024-11-18 20:37:08.684311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.825 [2024-11-18 20:37:08.684327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.825 [2024-11-18 20:37:08.684567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.825 [2024-11-18 20:37:08.684785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.825 [2024-11-18 20:37:08.684806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.825 [2024-11-18 20:37:08.684819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.825 [2024-11-18 20:37:08.684832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.825 [2024-11-18 20:37:08.697215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.825 [2024-11-18 20:37:08.697561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.825 [2024-11-18 20:37:08.697590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.825 [2024-11-18 20:37:08.697607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.825 [2024-11-18 20:37:08.697855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.825 [2024-11-18 20:37:08.698070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.825 [2024-11-18 20:37:08.698091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.825 [2024-11-18 20:37:08.698104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.825 [2024-11-18 20:37:08.698117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.825 [2024-11-18 20:37:08.710590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.825 [2024-11-18 20:37:08.711002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.825 [2024-11-18 20:37:08.711030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.825 [2024-11-18 20:37:08.711045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.825 [2024-11-18 20:37:08.711262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.825 [2024-11-18 20:37:08.711465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.825 [2024-11-18 20:37:08.711485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.825 [2024-11-18 20:37:08.711498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.825 [2024-11-18 20:37:08.711510] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.825 [2024-11-18 20:37:08.723799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.825 [2024-11-18 20:37:08.724301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.825 [2024-11-18 20:37:08.724353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.825 [2024-11-18 20:37:08.724370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.825 [2024-11-18 20:37:08.724615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.825 [2024-11-18 20:37:08.724857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.825 [2024-11-18 20:37:08.724878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.825 [2024-11-18 20:37:08.724892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.825 [2024-11-18 20:37:08.724906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.825 [2024-11-18 20:37:08.737078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.825 [2024-11-18 20:37:08.737439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.825 [2024-11-18 20:37:08.737466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.825 [2024-11-18 20:37:08.737482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.825 [2024-11-18 20:37:08.737724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.825 [2024-11-18 20:37:08.737946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.825 [2024-11-18 20:37:08.737972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.825 [2024-11-18 20:37:08.738001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.825 [2024-11-18 20:37:08.738014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.825 [2024-11-18 20:37:08.750155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.825 [2024-11-18 20:37:08.750474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.825 [2024-11-18 20:37:08.750541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.825 [2024-11-18 20:37:08.750558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.825 [2024-11-18 20:37:08.750798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.825 [2024-11-18 20:37:08.751022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.826 [2024-11-18 20:37:08.751043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.826 [2024-11-18 20:37:08.751056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.826 [2024-11-18 20:37:08.751068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.826 [2024-11-18 20:37:08.763196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.826 [2024-11-18 20:37:08.763622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.826 [2024-11-18 20:37:08.763679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.826 [2024-11-18 20:37:08.763697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.826 [2024-11-18 20:37:08.763942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.826 [2024-11-18 20:37:08.764130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.826 [2024-11-18 20:37:08.764150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.826 [2024-11-18 20:37:08.764163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.826 [2024-11-18 20:37:08.764175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.826 [2024-11-18 20:37:08.776267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.826 [2024-11-18 20:37:08.776592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.826 [2024-11-18 20:37:08.776666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.826 [2024-11-18 20:37:08.776684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.826 [2024-11-18 20:37:08.776947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.826 [2024-11-18 20:37:08.777152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.826 [2024-11-18 20:37:08.777172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.826 [2024-11-18 20:37:08.777185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.826 [2024-11-18 20:37:08.777197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.826 [2024-11-18 20:37:08.789705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.826 [2024-11-18 20:37:08.790123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.826 [2024-11-18 20:37:08.790152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.826 [2024-11-18 20:37:08.790169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.826 [2024-11-18 20:37:08.790406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.826 [2024-11-18 20:37:08.790609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.826 [2024-11-18 20:37:08.790655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.826 [2024-11-18 20:37:08.790671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.826 [2024-11-18 20:37:08.790699] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.826 [2024-11-18 20:37:08.802671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.826 [2024-11-18 20:37:08.802962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.826 [2024-11-18 20:37:08.803004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.826 [2024-11-18 20:37:08.803020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.826 [2024-11-18 20:37:08.803216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.826 [2024-11-18 20:37:08.803437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.826 [2024-11-18 20:37:08.803457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.826 [2024-11-18 20:37:08.803471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.826 [2024-11-18 20:37:08.803484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.826 [2024-11-18 20:37:08.815772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.826 [2024-11-18 20:37:08.816142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.826 [2024-11-18 20:37:08.816170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.826 [2024-11-18 20:37:08.816187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.826 [2024-11-18 20:37:08.816422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.826 [2024-11-18 20:37:08.816649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.826 [2024-11-18 20:37:08.816670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.826 [2024-11-18 20:37:08.816698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.826 [2024-11-18 20:37:08.816712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:56.826 [2024-11-18 20:37:08.829508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:56.826 [2024-11-18 20:37:08.829938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.826 [2024-11-18 20:37:08.830008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:56.826 [2024-11-18 20:37:08.830051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:56.826 [2024-11-18 20:37:08.830323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:56.826 [2024-11-18 20:37:08.830563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:56.826 [2024-11-18 20:37:08.830586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:56.826 [2024-11-18 20:37:08.830615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:56.826 [2024-11-18 20:37:08.830630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.086 [2024-11-18 20:37:08.842591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.086 [2024-11-18 20:37:08.842997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.086 [2024-11-18 20:37:08.843051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.086 [2024-11-18 20:37:08.843069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.086 [2024-11-18 20:37:08.843313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.086 [2024-11-18 20:37:08.843503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.086 [2024-11-18 20:37:08.843524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.086 [2024-11-18 20:37:08.843537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.086 [2024-11-18 20:37:08.843550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.086 [2024-11-18 20:37:08.855750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.086 [2024-11-18 20:37:08.856108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.086 [2024-11-18 20:37:08.856138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.086 [2024-11-18 20:37:08.856155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.086 [2024-11-18 20:37:08.856392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.086 [2024-11-18 20:37:08.856597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.086 [2024-11-18 20:37:08.856618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.086 [2024-11-18 20:37:08.856631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.086 [2024-11-18 20:37:08.856673] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.086 [2024-11-18 20:37:08.868760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.086 [2024-11-18 20:37:08.869104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.086 [2024-11-18 20:37:08.869133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.086 [2024-11-18 20:37:08.869151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.086 [2024-11-18 20:37:08.869392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.086 [2024-11-18 20:37:08.869597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.086 [2024-11-18 20:37:08.869617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.086 [2024-11-18 20:37:08.869631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.086 [2024-11-18 20:37:08.869671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.086 [2024-11-18 20:37:08.881803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.086 [2024-11-18 20:37:08.882211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.086 [2024-11-18 20:37:08.882240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.086 [2024-11-18 20:37:08.882258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.086 [2024-11-18 20:37:08.882495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.086 [2024-11-18 20:37:08.882744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.086 [2024-11-18 20:37:08.882766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.086 [2024-11-18 20:37:08.882780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.086 [2024-11-18 20:37:08.882794] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.086 [2024-11-18 20:37:08.895027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.086 [2024-11-18 20:37:08.895431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.086 [2024-11-18 20:37:08.895459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.086 [2024-11-18 20:37:08.895476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.086 [2024-11-18 20:37:08.895725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.086 [2024-11-18 20:37:08.895941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.086 [2024-11-18 20:37:08.895961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.086 [2024-11-18 20:37:08.895975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.086 [2024-11-18 20:37:08.896003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.086 [2024-11-18 20:37:08.908027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.086 [2024-11-18 20:37:08.908434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.086 [2024-11-18 20:37:08.908462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.086 [2024-11-18 20:37:08.908478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.086 [2024-11-18 20:37:08.908720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.086 [2024-11-18 20:37:08.908913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.086 [2024-11-18 20:37:08.908933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.086 [2024-11-18 20:37:08.908965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.086 [2024-11-18 20:37:08.908978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.086 [2024-11-18 20:37:08.921153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.086 [2024-11-18 20:37:08.921609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.086 [2024-11-18 20:37:08.921670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.086 [2024-11-18 20:37:08.921687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.086 [2024-11-18 20:37:08.921930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.086 [2024-11-18 20:37:08.922117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.086 [2024-11-18 20:37:08.922137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.086 [2024-11-18 20:37:08.922150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.086 [2024-11-18 20:37:08.922163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.086 [2024-11-18 20:37:08.934246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.086 [2024-11-18 20:37:08.934559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.086 [2024-11-18 20:37:08.934632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.086 [2024-11-18 20:37:08.934659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.086 [2024-11-18 20:37:08.934890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.086 [2024-11-18 20:37:08.935094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.086 [2024-11-18 20:37:08.935114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.086 [2024-11-18 20:37:08.935127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.086 [2024-11-18 20:37:08.935140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.086 [2024-11-18 20:37:08.947354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.086 [2024-11-18 20:37:08.947758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.086 [2024-11-18 20:37:08.947787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.086 [2024-11-18 20:37:08.947804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.086 [2024-11-18 20:37:08.948042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.086 [2024-11-18 20:37:08.948244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.086 [2024-11-18 20:37:08.948264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.086 [2024-11-18 20:37:08.948278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.086 [2024-11-18 20:37:08.948290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.086 [2024-11-18 20:37:08.960304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.086 [2024-11-18 20:37:08.960629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.086 [2024-11-18 20:37:08.960664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.086 [2024-11-18 20:37:08.960681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.087 [2024-11-18 20:37:08.960898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.087 [2024-11-18 20:37:08.961102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.087 [2024-11-18 20:37:08.961122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.087 [2024-11-18 20:37:08.961135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.087 [2024-11-18 20:37:08.961148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.087 [2024-11-18 20:37:08.973354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.087 [2024-11-18 20:37:08.973662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.087 [2024-11-18 20:37:08.973689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.087 [2024-11-18 20:37:08.973705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.087 [2024-11-18 20:37:08.973901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.087 [2024-11-18 20:37:08.974119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.087 [2024-11-18 20:37:08.974140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.087 [2024-11-18 20:37:08.974153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.087 [2024-11-18 20:37:08.974165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.087 [2024-11-18 20:37:08.986392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.087 [2024-11-18 20:37:08.986798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.087 [2024-11-18 20:37:08.986827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.087 [2024-11-18 20:37:08.986843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.087 [2024-11-18 20:37:08.987078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.087 [2024-11-18 20:37:08.987266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.087 [2024-11-18 20:37:08.987285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.087 [2024-11-18 20:37:08.987298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.087 [2024-11-18 20:37:08.987311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.087 [2024-11-18 20:37:08.999423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.087 [2024-11-18 20:37:08.999836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.087 [2024-11-18 20:37:08.999865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.087 [2024-11-18 20:37:08.999887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.087 [2024-11-18 20:37:09.000123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.087 [2024-11-18 20:37:09.000326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.087 [2024-11-18 20:37:09.000346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.087 [2024-11-18 20:37:09.000359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.087 [2024-11-18 20:37:09.000372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.087 [2024-11-18 20:37:09.012511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.087 [2024-11-18 20:37:09.012927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.087 [2024-11-18 20:37:09.012956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.087 [2024-11-18 20:37:09.012973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.087 [2024-11-18 20:37:09.013208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.087 [2024-11-18 20:37:09.013411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.087 [2024-11-18 20:37:09.013431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.087 [2024-11-18 20:37:09.013445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.087 [2024-11-18 20:37:09.013457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.087 [2024-11-18 20:37:09.025535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.087 [2024-11-18 20:37:09.025951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.087 [2024-11-18 20:37:09.025980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.087 [2024-11-18 20:37:09.025998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.087 [2024-11-18 20:37:09.026234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.087 [2024-11-18 20:37:09.026437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.087 [2024-11-18 20:37:09.026456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.087 [2024-11-18 20:37:09.026469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.087 [2024-11-18 20:37:09.026481] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.087 [2024-11-18 20:37:09.038829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.087 [2024-11-18 20:37:09.039227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.087 [2024-11-18 20:37:09.039298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.087 [2024-11-18 20:37:09.039315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.087 [2024-11-18 20:37:09.039547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.087 [2024-11-18 20:37:09.039785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.087 [2024-11-18 20:37:09.039807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.087 [2024-11-18 20:37:09.039821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.087 [2024-11-18 20:37:09.039833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.087 [2024-11-18 20:37:09.052044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.087 [2024-11-18 20:37:09.052422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.087 [2024-11-18 20:37:09.052493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.087 [2024-11-18 20:37:09.052510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.087 [2024-11-18 20:37:09.052747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.087 [2024-11-18 20:37:09.052956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.087 [2024-11-18 20:37:09.052975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.087 [2024-11-18 20:37:09.052988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.087 [2024-11-18 20:37:09.053000] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.087 [2024-11-18 20:37:09.065251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.087 [2024-11-18 20:37:09.065652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.087 [2024-11-18 20:37:09.065725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.087 [2024-11-18 20:37:09.065743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.087 [2024-11-18 20:37:09.065994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.087 [2024-11-18 20:37:09.066182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.087 [2024-11-18 20:37:09.066201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.087 [2024-11-18 20:37:09.066215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.087 [2024-11-18 20:37:09.066227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.087 [2024-11-18 20:37:09.078544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.087 [2024-11-18 20:37:09.079030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.087 [2024-11-18 20:37:09.079081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.087 [2024-11-18 20:37:09.079099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.087 [2024-11-18 20:37:09.079344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.087 [2024-11-18 20:37:09.079532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.087 [2024-11-18 20:37:09.079552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.087 [2024-11-18 20:37:09.079571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.087 [2024-11-18 20:37:09.079585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.351 [2024-11-18 20:37:09.092310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.351 [2024-11-18 20:37:09.092746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.351 [2024-11-18 20:37:09.092802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.351 [2024-11-18 20:37:09.092828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.351 [2024-11-18 20:37:09.093073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.351 [2024-11-18 20:37:09.093293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.351 [2024-11-18 20:37:09.093320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.351 [2024-11-18 20:37:09.093351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.351 [2024-11-18 20:37:09.093364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.351 [2024-11-18 20:37:09.105536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.351 [2024-11-18 20:37:09.105891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.351 [2024-11-18 20:37:09.105942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.351 [2024-11-18 20:37:09.105960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.351 [2024-11-18 20:37:09.106213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.351 [2024-11-18 20:37:09.106402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.351 [2024-11-18 20:37:09.106421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.351 [2024-11-18 20:37:09.106434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.351 [2024-11-18 20:37:09.106447] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.351 [2024-11-18 20:37:09.119048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.351 [2024-11-18 20:37:09.119429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.351 [2024-11-18 20:37:09.119458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.351 [2024-11-18 20:37:09.119475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.351 [2024-11-18 20:37:09.119715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.351 [2024-11-18 20:37:09.119948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.351 [2024-11-18 20:37:09.119970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.351 [2024-11-18 20:37:09.119984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.351 [2024-11-18 20:37:09.119997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.351 [2024-11-18 20:37:09.132468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.351 [2024-11-18 20:37:09.132825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.351 [2024-11-18 20:37:09.132857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.351 [2024-11-18 20:37:09.132902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.351 [2024-11-18 20:37:09.133154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.351 [2024-11-18 20:37:09.133347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.351 [2024-11-18 20:37:09.133367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.351 [2024-11-18 20:37:09.133384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.351 [2024-11-18 20:37:09.133396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.351 [2024-11-18 20:37:09.145851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.351 [2024-11-18 20:37:09.146252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.351 [2024-11-18 20:37:09.146280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.351 [2024-11-18 20:37:09.146296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.351 [2024-11-18 20:37:09.146512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.351 [2024-11-18 20:37:09.146754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.351 [2024-11-18 20:37:09.146776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.351 [2024-11-18 20:37:09.146790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.351 [2024-11-18 20:37:09.146803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.351 [2024-11-18 20:37:09.159163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.351 [2024-11-18 20:37:09.159566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.351 [2024-11-18 20:37:09.159595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.351 [2024-11-18 20:37:09.159611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.351 [2024-11-18 20:37:09.159876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.351 [2024-11-18 20:37:09.160082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.351 [2024-11-18 20:37:09.160102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.351 [2024-11-18 20:37:09.160115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.351 [2024-11-18 20:37:09.160127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.351 [2024-11-18 20:37:09.172552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.351 [2024-11-18 20:37:09.172950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.351 [2024-11-18 20:37:09.172995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.351 [2024-11-18 20:37:09.173020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.351 [2024-11-18 20:37:09.173253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.351 [2024-11-18 20:37:09.173447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.351 [2024-11-18 20:37:09.173467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.351 [2024-11-18 20:37:09.173480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.351 [2024-11-18 20:37:09.173493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.351 [2024-11-18 20:37:09.185862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.351 [2024-11-18 20:37:09.186286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.351 [2024-11-18 20:37:09.186317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.351 [2024-11-18 20:37:09.186349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.351 [2024-11-18 20:37:09.186592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.351 [2024-11-18 20:37:09.186821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.351 [2024-11-18 20:37:09.186842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.351 [2024-11-18 20:37:09.186857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.351 [2024-11-18 20:37:09.186870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.351 [2024-11-18 20:37:09.199143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.351 [2024-11-18 20:37:09.199489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.351 [2024-11-18 20:37:09.199518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.351 [2024-11-18 20:37:09.199535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.351 [2024-11-18 20:37:09.199803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.351 [2024-11-18 20:37:09.200034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.351 [2024-11-18 20:37:09.200053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.352 [2024-11-18 20:37:09.200066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.352 [2024-11-18 20:37:09.200078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.352 [2024-11-18 20:37:09.212448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.352 [2024-11-18 20:37:09.212871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.352 [2024-11-18 20:37:09.212911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.352 [2024-11-18 20:37:09.212928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.352 [2024-11-18 20:37:09.213184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.352 [2024-11-18 20:37:09.213377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.352 [2024-11-18 20:37:09.213397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.352 [2024-11-18 20:37:09.213409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.352 [2024-11-18 20:37:09.213422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.352 [2024-11-18 20:37:09.225608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.352 [2024-11-18 20:37:09.226029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.352 [2024-11-18 20:37:09.226060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.352 [2024-11-18 20:37:09.226092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.352 [2024-11-18 20:37:09.226333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.352 [2024-11-18 20:37:09.226522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.352 [2024-11-18 20:37:09.226541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.352 [2024-11-18 20:37:09.226554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.352 [2024-11-18 20:37:09.226566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.352 [2024-11-18 20:37:09.238918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.352 [2024-11-18 20:37:09.239323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.352 [2024-11-18 20:37:09.239356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.352 [2024-11-18 20:37:09.239388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.352 [2024-11-18 20:37:09.239633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.352 [2024-11-18 20:37:09.239851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.352 [2024-11-18 20:37:09.239871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.352 [2024-11-18 20:37:09.239884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.352 [2024-11-18 20:37:09.239896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.352 [2024-11-18 20:37:09.252094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.352 [2024-11-18 20:37:09.252546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.352 [2024-11-18 20:37:09.252577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.352 [2024-11-18 20:37:09.252608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.352 [2024-11-18 20:37:09.252866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.352 [2024-11-18 20:37:09.253075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.352 [2024-11-18 20:37:09.253096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.352 [2024-11-18 20:37:09.253114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.352 [2024-11-18 20:37:09.253128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.352 [2024-11-18 20:37:09.265191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.352 [2024-11-18 20:37:09.265599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.352 [2024-11-18 20:37:09.265626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.352 [2024-11-18 20:37:09.265665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.352 [2024-11-18 20:37:09.265914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.352 [2024-11-18 20:37:09.266133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.352 [2024-11-18 20:37:09.266153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.352 [2024-11-18 20:37:09.266167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.352 [2024-11-18 20:37:09.266179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.352 [2024-11-18 20:37:09.278485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.352 [2024-11-18 20:37:09.278970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.352 [2024-11-18 20:37:09.279024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.352 [2024-11-18 20:37:09.279041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.352 [2024-11-18 20:37:09.279286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.352 [2024-11-18 20:37:09.279497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.352 [2024-11-18 20:37:09.279522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.352 [2024-11-18 20:37:09.279536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.352 [2024-11-18 20:37:09.279563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.352 [2024-11-18 20:37:09.291799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.352 [2024-11-18 20:37:09.292263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.352 [2024-11-18 20:37:09.292311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.352 [2024-11-18 20:37:09.292330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.352 [2024-11-18 20:37:09.292599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.352 [2024-11-18 20:37:09.292841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.352 [2024-11-18 20:37:09.292863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.352 [2024-11-18 20:37:09.292877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.352 [2024-11-18 20:37:09.292890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.352 [2024-11-18 20:37:09.304999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.352 [2024-11-18 20:37:09.305382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.352 [2024-11-18 20:37:09.305451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.352 [2024-11-18 20:37:09.305467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.352 [2024-11-18 20:37:09.305709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.352 [2024-11-18 20:37:09.305931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.352 [2024-11-18 20:37:09.305951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.352 [2024-11-18 20:37:09.305979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.352 [2024-11-18 20:37:09.305992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.352 [2024-11-18 20:37:09.318171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.352 [2024-11-18 20:37:09.318578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.352 [2024-11-18 20:37:09.318609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.352 [2024-11-18 20:37:09.318653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.352 [2024-11-18 20:37:09.318905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.352 [2024-11-18 20:37:09.319125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.352 [2024-11-18 20:37:09.319144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.352 [2024-11-18 20:37:09.319157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.352 [2024-11-18 20:37:09.319169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.352 [2024-11-18 20:37:09.331284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.352 [2024-11-18 20:37:09.331677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.352 [2024-11-18 20:37:09.331726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.352 [2024-11-18 20:37:09.331743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.352 [2024-11-18 20:37:09.331984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.352 [2024-11-18 20:37:09.332172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.352 [2024-11-18 20:37:09.332191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.352 [2024-11-18 20:37:09.332204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.352 [2024-11-18 20:37:09.332216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.352 [2024-11-18 20:37:09.344325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.352 [2024-11-18 20:37:09.344713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.352 [2024-11-18 20:37:09.344741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.352 [2024-11-18 20:37:09.344762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.352 [2024-11-18 20:37:09.344990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.352 [2024-11-18 20:37:09.345198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.352 [2024-11-18 20:37:09.345218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.352 [2024-11-18 20:37:09.345232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.352 [2024-11-18 20:37:09.345244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.613 5683.75 IOPS, 22.20 MiB/s [2024-11-18T19:37:09.621Z] [2024-11-18 20:37:09.357863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.613 [2024-11-18 20:37:09.358229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.613 [2024-11-18 20:37:09.358258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.613 [2024-11-18 20:37:09.358275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.613 [2024-11-18 20:37:09.358505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.613 [2024-11-18 20:37:09.358755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.613 [2024-11-18 20:37:09.358776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.613 [2024-11-18 20:37:09.358790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.613 [2024-11-18 20:37:09.358804] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.613 [2024-11-18 20:37:09.370932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.613 [2024-11-18 20:37:09.371283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.613 [2024-11-18 20:37:09.371313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.613 [2024-11-18 20:37:09.371331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.613 [2024-11-18 20:37:09.371566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.613 [2024-11-18 20:37:09.371800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.613 [2024-11-18 20:37:09.371822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.613 [2024-11-18 20:37:09.371835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.613 [2024-11-18 20:37:09.371849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.613 [2024-11-18 20:37:09.384272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.614 [2024-11-18 20:37:09.384616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.614 [2024-11-18 20:37:09.384668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.614 [2024-11-18 20:37:09.384687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.614 [2024-11-18 20:37:09.384930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.614 [2024-11-18 20:37:09.385138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.614 [2024-11-18 20:37:09.385158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.614 [2024-11-18 20:37:09.385170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.614 [2024-11-18 20:37:09.385182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.614 [2024-11-18 20:37:09.397319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.614 [2024-11-18 20:37:09.397664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.614 [2024-11-18 20:37:09.397693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.614 [2024-11-18 20:37:09.397709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.614 [2024-11-18 20:37:09.397945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.614 [2024-11-18 20:37:09.398150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.614 [2024-11-18 20:37:09.398169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.614 [2024-11-18 20:37:09.398182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.614 [2024-11-18 20:37:09.398194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.614 [2024-11-18 20:37:09.410499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.614 [2024-11-18 20:37:09.410910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.614 [2024-11-18 20:37:09.410939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.614 [2024-11-18 20:37:09.410956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.614 [2024-11-18 20:37:09.411191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.614 [2024-11-18 20:37:09.411394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.614 [2024-11-18 20:37:09.411414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.614 [2024-11-18 20:37:09.411427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.614 [2024-11-18 20:37:09.411439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.614 [2024-11-18 20:37:09.423596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.614 [2024-11-18 20:37:09.423975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.614 [2024-11-18 20:37:09.424003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.614 [2024-11-18 20:37:09.424019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.614 [2024-11-18 20:37:09.424235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.614 [2024-11-18 20:37:09.424457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.614 [2024-11-18 20:37:09.424477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.614 [2024-11-18 20:37:09.424495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.614 [2024-11-18 20:37:09.424508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.614 [2024-11-18 20:37:09.436622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.614 [2024-11-18 20:37:09.436970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.614 [2024-11-18 20:37:09.436998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.614 [2024-11-18 20:37:09.437014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.614 [2024-11-18 20:37:09.437230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.614 [2024-11-18 20:37:09.437433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.614 [2024-11-18 20:37:09.437452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.614 [2024-11-18 20:37:09.437465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.614 [2024-11-18 20:37:09.437477] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.614 [2024-11-18 20:37:09.449695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.614 [2024-11-18 20:37:09.450038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.614 [2024-11-18 20:37:09.450066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.614 [2024-11-18 20:37:09.450083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.614 [2024-11-18 20:37:09.450318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.614 [2024-11-18 20:37:09.450522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.614 [2024-11-18 20:37:09.450541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.614 [2024-11-18 20:37:09.450554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.614 [2024-11-18 20:37:09.450566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.614 [2024-11-18 20:37:09.462783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.614 [2024-11-18 20:37:09.463125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.614 [2024-11-18 20:37:09.463153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.614 [2024-11-18 20:37:09.463169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.614 [2024-11-18 20:37:09.463399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.614 [2024-11-18 20:37:09.463603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.614 [2024-11-18 20:37:09.463647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.614 [2024-11-18 20:37:09.463663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.614 [2024-11-18 20:37:09.463676] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.614 [2024-11-18 20:37:09.475787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.614 [2024-11-18 20:37:09.476121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.614 [2024-11-18 20:37:09.476149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.614 [2024-11-18 20:37:09.476166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.614 [2024-11-18 20:37:09.476382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.614 [2024-11-18 20:37:09.476585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.614 [2024-11-18 20:37:09.476604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.614 [2024-11-18 20:37:09.476617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.614 [2024-11-18 20:37:09.476629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.614 [2024-11-18 20:37:09.488936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:57.614 [2024-11-18 20:37:09.489353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.614 [2024-11-18 20:37:09.489382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:57.615 [2024-11-18 20:37:09.489398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:57.615 [2024-11-18 20:37:09.489645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:57.615 [2024-11-18 20:37:09.489864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:57.615 [2024-11-18 20:37:09.489885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:57.615 [2024-11-18 20:37:09.489900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:57.615 [2024-11-18 20:37:09.489913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:57.615 [2024-11-18 20:37:09.501973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:57.615 [2024-11-18 20:37:09.502318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.615 [2024-11-18 20:37:09.502346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:57.615 [2024-11-18 20:37:09.502363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:57.615 [2024-11-18 20:37:09.502599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:57.615 [2024-11-18 20:37:09.502833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:57.615 [2024-11-18 20:37:09.502853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:57.615 [2024-11-18 20:37:09.502867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:57.615 [2024-11-18 20:37:09.502880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:57.615 [2024-11-18 20:37:09.514932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:57.615 [2024-11-18 20:37:09.515267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.615 [2024-11-18 20:37:09.515293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:57.615 [2024-11-18 20:37:09.515313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:57.615 [2024-11-18 20:37:09.515524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:57.615 [2024-11-18 20:37:09.515755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:57.615 [2024-11-18 20:37:09.515776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:57.615 [2024-11-18 20:37:09.515789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:57.615 [2024-11-18 20:37:09.515801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:57.615 [2024-11-18 20:37:09.528060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:57.615 [2024-11-18 20:37:09.528367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.615 [2024-11-18 20:37:09.528395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:57.615 [2024-11-18 20:37:09.528411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:57.615 [2024-11-18 20:37:09.528703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:57.615 [2024-11-18 20:37:09.528897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:57.615 [2024-11-18 20:37:09.528917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:57.615 [2024-11-18 20:37:09.528930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:57.615 [2024-11-18 20:37:09.528943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:57.615 [2024-11-18 20:37:09.541094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:57.615 [2024-11-18 20:37:09.541502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.615 [2024-11-18 20:37:09.541530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:57.615 [2024-11-18 20:37:09.541547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:57.615 [2024-11-18 20:37:09.541810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:57.615 [2024-11-18 20:37:09.542037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:57.615 [2024-11-18 20:37:09.542056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:57.615 [2024-11-18 20:37:09.542069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:57.615 [2024-11-18 20:37:09.542081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:57.615 [2024-11-18 20:37:09.554206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:57.615 [2024-11-18 20:37:09.554552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.615 [2024-11-18 20:37:09.554579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:57.615 [2024-11-18 20:37:09.554595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:57.615 [2024-11-18 20:37:09.554859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:57.615 [2024-11-18 20:37:09.555089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:57.615 [2024-11-18 20:37:09.555109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:57.615 [2024-11-18 20:37:09.555122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:57.615 [2024-11-18 20:37:09.555135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:57.615 [2024-11-18 20:37:09.567292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:57.615 [2024-11-18 20:37:09.567697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.615 [2024-11-18 20:37:09.567725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:57.615 [2024-11-18 20:37:09.567741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:57.615 [2024-11-18 20:37:09.567971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:57.615 [2024-11-18 20:37:09.568175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:57.615 [2024-11-18 20:37:09.568194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:57.615 [2024-11-18 20:37:09.568206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:57.615 [2024-11-18 20:37:09.568218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:57.615 [2024-11-18 20:37:09.580299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:57.615 [2024-11-18 20:37:09.580705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.615 [2024-11-18 20:37:09.580734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:57.615 [2024-11-18 20:37:09.580751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:57.615 [2024-11-18 20:37:09.580984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:57.615 [2024-11-18 20:37:09.581173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:57.615 [2024-11-18 20:37:09.581192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:57.615 [2024-11-18 20:37:09.581204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:57.615 [2024-11-18 20:37:09.581216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:57.615 [2024-11-18 20:37:09.593322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:57.615 [2024-11-18 20:37:09.593677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.615 [2024-11-18 20:37:09.593706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:57.615 [2024-11-18 20:37:09.593722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:57.615 [2024-11-18 20:37:09.593958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:57.615 [2024-11-18 20:37:09.594161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:57.615 [2024-11-18 20:37:09.594181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:57.615 [2024-11-18 20:37:09.594193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:57.615 [2024-11-18 20:37:09.594209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:57.615 [2024-11-18 20:37:09.606470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:57.615 [2024-11-18 20:37:09.606776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.615 [2024-11-18 20:37:09.606817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:57.616 [2024-11-18 20:37:09.606834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:57.616 [2024-11-18 20:37:09.607053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:57.616 [2024-11-18 20:37:09.607259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:57.616 [2024-11-18 20:37:09.607278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:57.616 [2024-11-18 20:37:09.607291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:57.616 [2024-11-18 20:37:09.607303] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:57.616 [2024-11-18 20:37:09.620074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:57.875 [2024-11-18 20:37:09.620502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.876 [2024-11-18 20:37:09.620534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:57.876 [2024-11-18 20:37:09.620553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:57.876 [2024-11-18 20:37:09.620816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:57.876 [2024-11-18 20:37:09.621052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:57.876 [2024-11-18 20:37:09.621072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:57.876 [2024-11-18 20:37:09.621085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:57.876 [2024-11-18 20:37:09.621098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:57.876 [2024-11-18 20:37:09.633229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:57.876 [2024-11-18 20:37:09.633643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.876 [2024-11-18 20:37:09.633673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:57.876 [2024-11-18 20:37:09.633690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:57.876 [2024-11-18 20:37:09.633925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:57.876 [2024-11-18 20:37:09.634129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:57.876 [2024-11-18 20:37:09.634148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:57.876 [2024-11-18 20:37:09.634161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:57.876 [2024-11-18 20:37:09.634173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:57.876 [2024-11-18 20:37:09.646223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:57.876 [2024-11-18 20:37:09.646571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.876 [2024-11-18 20:37:09.646600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:57.876 [2024-11-18 20:37:09.646617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:57.876 [2024-11-18 20:37:09.646881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:57.876 [2024-11-18 20:37:09.647119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:57.876 [2024-11-18 20:37:09.647139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:57.876 [2024-11-18 20:37:09.647152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:57.876 [2024-11-18 20:37:09.647164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:57.876 [2024-11-18 20:37:09.659273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:57.876 [2024-11-18 20:37:09.659619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.876 [2024-11-18 20:37:09.659669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:57.876 [2024-11-18 20:37:09.659689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:57.876 [2024-11-18 20:37:09.659928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:57.876 [2024-11-18 20:37:09.660132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:57.876 [2024-11-18 20:37:09.660151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:57.876 [2024-11-18 20:37:09.660163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:57.876 [2024-11-18 20:37:09.660175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:57.876 [2024-11-18 20:37:09.672320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:57.876 [2024-11-18 20:37:09.672736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.876 [2024-11-18 20:37:09.672765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:57.876 [2024-11-18 20:37:09.672782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:57.876 [2024-11-18 20:37:09.673024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:57.876 [2024-11-18 20:37:09.673233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:57.876 [2024-11-18 20:37:09.673253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:57.876 [2024-11-18 20:37:09.673266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:57.876 [2024-11-18 20:37:09.673278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:57.876 [2024-11-18 20:37:09.685427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:57.876 [2024-11-18 20:37:09.685797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.876 [2024-11-18 20:37:09.685827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:57.876 [2024-11-18 20:37:09.685849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:57.876 [2024-11-18 20:37:09.686102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:57.876 [2024-11-18 20:37:09.686305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:57.876 [2024-11-18 20:37:09.686324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:57.876 [2024-11-18 20:37:09.686336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:57.876 [2024-11-18 20:37:09.686349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:57.876 [2024-11-18 20:37:09.699069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:57.876 [2024-11-18 20:37:09.699421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.876 [2024-11-18 20:37:09.699450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:57.876 [2024-11-18 20:37:09.699467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:57.876 [2024-11-18 20:37:09.699718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:57.876 [2024-11-18 20:37:09.699946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:57.876 [2024-11-18 20:37:09.699966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:57.876 [2024-11-18 20:37:09.699995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:57.876 [2024-11-18 20:37:09.700008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:57.876 [2024-11-18 20:37:09.712280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:57.876 [2024-11-18 20:37:09.712624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.876 [2024-11-18 20:37:09.712675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:57.876 [2024-11-18 20:37:09.712693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:57.876 [2024-11-18 20:37:09.712908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:57.876 [2024-11-18 20:37:09.713116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:57.876 [2024-11-18 20:37:09.713135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:57.876 [2024-11-18 20:37:09.713148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:57.876 [2024-11-18 20:37:09.713160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:57.876 [2024-11-18 20:37:09.725527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:57.876 [2024-11-18 20:37:09.725874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.876 [2024-11-18 20:37:09.725904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:57.876 [2024-11-18 20:37:09.725921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:57.876 [2024-11-18 20:37:09.726173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:57.876 [2024-11-18 20:37:09.726361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:57.876 [2024-11-18 20:37:09.726385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:57.876 [2024-11-18 20:37:09.726399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:57.876 [2024-11-18 20:37:09.726411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:57.876 [2024-11-18 20:37:09.738800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:57.876 [2024-11-18 20:37:09.739248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.876 [2024-11-18 20:37:09.739276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:57.876 [2024-11-18 20:37:09.739293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:57.876 [2024-11-18 20:37:09.739527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:57.876 [2024-11-18 20:37:09.739777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:57.876 [2024-11-18 20:37:09.739798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:57.876 [2024-11-18 20:37:09.739813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:57.877 [2024-11-18 20:37:09.739825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:57.877 [2024-11-18 20:37:09.751757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:57.877 [2024-11-18 20:37:09.752097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.877 [2024-11-18 20:37:09.752125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:57.877 [2024-11-18 20:37:09.752142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:57.877 [2024-11-18 20:37:09.752358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:57.877 [2024-11-18 20:37:09.752562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:57.877 [2024-11-18 20:37:09.752581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:57.877 [2024-11-18 20:37:09.752593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:57.877 [2024-11-18 20:37:09.752606] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:57.877 [2024-11-18 20:37:09.764778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:57.877 [2024-11-18 20:37:09.765121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.877 [2024-11-18 20:37:09.765149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:57.877 [2024-11-18 20:37:09.765166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:57.877 [2024-11-18 20:37:09.765402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:57.877 [2024-11-18 20:37:09.765606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:57.877 [2024-11-18 20:37:09.765625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:57.877 [2024-11-18 20:37:09.765660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:57.877 [2024-11-18 20:37:09.765680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:57.877 [2024-11-18 20:37:09.777758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:57.877 [2024-11-18 20:37:09.778099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.877 [2024-11-18 20:37:09.778128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:57.877 [2024-11-18 20:37:09.778144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:57.877 [2024-11-18 20:37:09.778378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:57.877 [2024-11-18 20:37:09.778582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:57.877 [2024-11-18 20:37:09.778601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:57.877 [2024-11-18 20:37:09.778614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:57.877 [2024-11-18 20:37:09.778626] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:57.877 [2024-11-18 20:37:09.790725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:57.877 [2024-11-18 20:37:09.791035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.877 [2024-11-18 20:37:09.791062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:57.877 [2024-11-18 20:37:09.791079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:57.877 [2024-11-18 20:37:09.791295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:57.877 [2024-11-18 20:37:09.791518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:57.877 [2024-11-18 20:37:09.791537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:57.877 [2024-11-18 20:37:09.791551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:57.877 [2024-11-18 20:37:09.791563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:57.877 [2024-11-18 20:37:09.803916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.877 [2024-11-18 20:37:09.804298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.877 [2024-11-18 20:37:09.804327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.877 [2024-11-18 20:37:09.804344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.877 [2024-11-18 20:37:09.804578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.877 [2024-11-18 20:37:09.804810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.877 [2024-11-18 20:37:09.804831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.877 [2024-11-18 20:37:09.804844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.877 [2024-11-18 20:37:09.804857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.877 [2024-11-18 20:37:09.817189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.877 [2024-11-18 20:37:09.817604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.877 [2024-11-18 20:37:09.817632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.877 [2024-11-18 20:37:09.817675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.877 [2024-11-18 20:37:09.817918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.877 [2024-11-18 20:37:09.818139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.877 [2024-11-18 20:37:09.818159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.877 [2024-11-18 20:37:09.818171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.877 [2024-11-18 20:37:09.818183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.877 [2024-11-18 20:37:09.830365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.877 [2024-11-18 20:37:09.830771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.877 [2024-11-18 20:37:09.830800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.877 [2024-11-18 20:37:09.830817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.877 [2024-11-18 20:37:09.831052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.877 [2024-11-18 20:37:09.831256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.877 [2024-11-18 20:37:09.831275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.877 [2024-11-18 20:37:09.831288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.877 [2024-11-18 20:37:09.831300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.877 [2024-11-18 20:37:09.843421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.877 [2024-11-18 20:37:09.843832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.877 [2024-11-18 20:37:09.843862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.877 [2024-11-18 20:37:09.843878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.877 [2024-11-18 20:37:09.844114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.877 [2024-11-18 20:37:09.844317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.877 [2024-11-18 20:37:09.844337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.877 [2024-11-18 20:37:09.844349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.877 [2024-11-18 20:37:09.844361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.877 [2024-11-18 20:37:09.856562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.877 [2024-11-18 20:37:09.856999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.877 [2024-11-18 20:37:09.857027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.877 [2024-11-18 20:37:09.857044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.877 [2024-11-18 20:37:09.857285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.877 [2024-11-18 20:37:09.857488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.877 [2024-11-18 20:37:09.857507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.877 [2024-11-18 20:37:09.857520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.877 [2024-11-18 20:37:09.857532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:57.877 [2024-11-18 20:37:09.869601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:57.877 [2024-11-18 20:37:09.870012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.877 [2024-11-18 20:37:09.870040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:57.877 [2024-11-18 20:37:09.870057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:57.877 [2024-11-18 20:37:09.870295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:57.877 [2024-11-18 20:37:09.870498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:57.877 [2024-11-18 20:37:09.870517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:57.878 [2024-11-18 20:37:09.870529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:57.878 [2024-11-18 20:37:09.870542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.137 [2024-11-18 20:37:09.883268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.137 [2024-11-18 20:37:09.883702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.137 [2024-11-18 20:37:09.883733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.137 [2024-11-18 20:37:09.883751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.137 [2024-11-18 20:37:09.883987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.137 [2024-11-18 20:37:09.884192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.137 [2024-11-18 20:37:09.884211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.137 [2024-11-18 20:37:09.884224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.137 [2024-11-18 20:37:09.884236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.137 [2024-11-18 20:37:09.896271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.137 [2024-11-18 20:37:09.896625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.137 [2024-11-18 20:37:09.896677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.137 [2024-11-18 20:37:09.896694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.137 [2024-11-18 20:37:09.896918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.137 [2024-11-18 20:37:09.897123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.137 [2024-11-18 20:37:09.897148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.137 [2024-11-18 20:37:09.897162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.137 [2024-11-18 20:37:09.897174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.137 [2024-11-18 20:37:09.909371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.137 [2024-11-18 20:37:09.909713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.138 [2024-11-18 20:37:09.909741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.138 [2024-11-18 20:37:09.909757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.138 [2024-11-18 20:37:09.909967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.138 [2024-11-18 20:37:09.910170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.138 [2024-11-18 20:37:09.910189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.138 [2024-11-18 20:37:09.910202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.138 [2024-11-18 20:37:09.910214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.138 [2024-11-18 20:37:09.922432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.138 [2024-11-18 20:37:09.922759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.138 [2024-11-18 20:37:09.922786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.138 [2024-11-18 20:37:09.922803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.138 [2024-11-18 20:37:09.923018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.138 [2024-11-18 20:37:09.923222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.138 [2024-11-18 20:37:09.923240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.138 [2024-11-18 20:37:09.923254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.138 [2024-11-18 20:37:09.923266] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.138 [2024-11-18 20:37:09.935419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.138 [2024-11-18 20:37:09.935764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.138 [2024-11-18 20:37:09.935792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.138 [2024-11-18 20:37:09.935808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.138 [2024-11-18 20:37:09.936024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.138 [2024-11-18 20:37:09.936229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.138 [2024-11-18 20:37:09.936247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.138 [2024-11-18 20:37:09.936260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.138 [2024-11-18 20:37:09.936277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.138 [2024-11-18 20:37:09.948406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.138 [2024-11-18 20:37:09.948778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.138 [2024-11-18 20:37:09.948806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.138 [2024-11-18 20:37:09.948822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.138 [2024-11-18 20:37:09.949039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.138 [2024-11-18 20:37:09.949244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.138 [2024-11-18 20:37:09.949263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.138 [2024-11-18 20:37:09.949275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.138 [2024-11-18 20:37:09.949287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.138 [2024-11-18 20:37:09.961448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.138 [2024-11-18 20:37:09.961858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.138 [2024-11-18 20:37:09.961888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.138 [2024-11-18 20:37:09.961904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.138 [2024-11-18 20:37:09.962139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.138 [2024-11-18 20:37:09.962342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.138 [2024-11-18 20:37:09.962361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.138 [2024-11-18 20:37:09.962374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.138 [2024-11-18 20:37:09.962385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.138 [2024-11-18 20:37:09.974424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.138 [2024-11-18 20:37:09.974800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.138 [2024-11-18 20:37:09.974829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.138 [2024-11-18 20:37:09.974845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.138 [2024-11-18 20:37:09.975061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.138 [2024-11-18 20:37:09.975264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.138 [2024-11-18 20:37:09.975284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.138 [2024-11-18 20:37:09.975296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.138 [2024-11-18 20:37:09.975308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.138 [2024-11-18 20:37:09.987455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.138 [2024-11-18 20:37:09.987806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.138 [2024-11-18 20:37:09.987839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.138 [2024-11-18 20:37:09.987856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.138 [2024-11-18 20:37:09.988091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.138 [2024-11-18 20:37:09.988295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.138 [2024-11-18 20:37:09.988314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.138 [2024-11-18 20:37:09.988328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.138 [2024-11-18 20:37:09.988340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.138 [2024-11-18 20:37:10.000462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.138 [2024-11-18 20:37:10.000912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.138 [2024-11-18 20:37:10.000954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.138 [2024-11-18 20:37:10.000979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.138 [2024-11-18 20:37:10.001239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.138 [2024-11-18 20:37:10.001511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.138 [2024-11-18 20:37:10.001543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.138 [2024-11-18 20:37:10.001566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.138 [2024-11-18 20:37:10.001586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.138 [2024-11-18 20:37:10.013918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.138 [2024-11-18 20:37:10.014404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.138 [2024-11-18 20:37:10.014437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.138 [2024-11-18 20:37:10.014455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.138 [2024-11-18 20:37:10.014714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.138 [2024-11-18 20:37:10.014936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.139 [2024-11-18 20:37:10.014956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.139 [2024-11-18 20:37:10.014970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.139 [2024-11-18 20:37:10.014998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.139 [2024-11-18 20:37:10.027489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.139 [2024-11-18 20:37:10.027954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.139 [2024-11-18 20:37:10.027987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.139 [2024-11-18 20:37:10.028006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.139 [2024-11-18 20:37:10.028260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.139 [2024-11-18 20:37:10.028477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.139 [2024-11-18 20:37:10.028513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.139 [2024-11-18 20:37:10.028529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.139 [2024-11-18 20:37:10.028543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.139 [2024-11-18 20:37:10.041159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.139 [2024-11-18 20:37:10.041515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.139 [2024-11-18 20:37:10.041545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.139 [2024-11-18 20:37:10.041563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.139 [2024-11-18 20:37:10.041817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.139 [2024-11-18 20:37:10.042065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.139 [2024-11-18 20:37:10.042087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.139 [2024-11-18 20:37:10.042103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.139 [2024-11-18 20:37:10.042124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.139 [2024-11-18 20:37:10.054536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.139 [2024-11-18 20:37:10.054964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.139 [2024-11-18 20:37:10.054996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.139 [2024-11-18 20:37:10.055013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.139 [2024-11-18 20:37:10.055243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.139 [2024-11-18 20:37:10.055454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.139 [2024-11-18 20:37:10.055473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.139 [2024-11-18 20:37:10.055486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.139 [2024-11-18 20:37:10.055499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.139 [2024-11-18 20:37:10.067678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.139 [2024-11-18 20:37:10.068102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.139 [2024-11-18 20:37:10.068148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.139 [2024-11-18 20:37:10.068165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.139 [2024-11-18 20:37:10.068401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.139 [2024-11-18 20:37:10.068603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.139 [2024-11-18 20:37:10.068628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.139 [2024-11-18 20:37:10.068668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.139 [2024-11-18 20:37:10.068682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.139 [2024-11-18 20:37:10.080964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.139 [2024-11-18 20:37:10.081303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.139 [2024-11-18 20:37:10.081332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.139 [2024-11-18 20:37:10.081348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.139 [2024-11-18 20:37:10.081565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.139 [2024-11-18 20:37:10.081805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.139 [2024-11-18 20:37:10.081827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.139 [2024-11-18 20:37:10.081841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.139 [2024-11-18 20:37:10.081854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.139 [2024-11-18 20:37:10.094261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.139 [2024-11-18 20:37:10.094710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.139 [2024-11-18 20:37:10.094742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.139 [2024-11-18 20:37:10.094759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.139 [2024-11-18 20:37:10.095001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.139 [2024-11-18 20:37:10.095223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.139 [2024-11-18 20:37:10.095244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.139 [2024-11-18 20:37:10.095257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.139 [2024-11-18 20:37:10.095270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.139 [2024-11-18 20:37:10.107443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.139 [2024-11-18 20:37:10.107795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.139 [2024-11-18 20:37:10.107823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.139 [2024-11-18 20:37:10.107839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.139 [2024-11-18 20:37:10.108055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.139 [2024-11-18 20:37:10.108300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.139 [2024-11-18 20:37:10.108321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.139 [2024-11-18 20:37:10.108336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.139 [2024-11-18 20:37:10.108349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.139 [2024-11-18 20:37:10.120805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.139 [2024-11-18 20:37:10.121137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.139 [2024-11-18 20:37:10.121165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.139 [2024-11-18 20:37:10.121196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.139 [2024-11-18 20:37:10.121427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.139 [2024-11-18 20:37:10.121674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.139 [2024-11-18 20:37:10.121709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.139 [2024-11-18 20:37:10.121724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.140 [2024-11-18 20:37:10.121737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.140 [2024-11-18 20:37:10.134201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.140 [2024-11-18 20:37:10.134547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.140 [2024-11-18 20:37:10.134574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.140 [2024-11-18 20:37:10.134591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.140 [2024-11-18 20:37:10.134845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.140 [2024-11-18 20:37:10.135090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.140 [2024-11-18 20:37:10.135110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.140 [2024-11-18 20:37:10.135122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.140 [2024-11-18 20:37:10.135134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.399 [2024-11-18 20:37:10.148014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.399 [2024-11-18 20:37:10.148375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.399 [2024-11-18 20:37:10.148406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.399 [2024-11-18 20:37:10.148424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.399 [2024-11-18 20:37:10.148664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.399 [2024-11-18 20:37:10.148937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.399 [2024-11-18 20:37:10.148961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.399 [2024-11-18 20:37:10.148989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.399 [2024-11-18 20:37:10.149003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.399 [2024-11-18 20:37:10.161366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.399 [2024-11-18 20:37:10.161754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.399 [2024-11-18 20:37:10.161791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.399 [2024-11-18 20:37:10.161809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.399 [2024-11-18 20:37:10.162050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.399 [2024-11-18 20:37:10.162255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.399 [2024-11-18 20:37:10.162275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.399 [2024-11-18 20:37:10.162287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.399 [2024-11-18 20:37:10.162300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.399 [2024-11-18 20:37:10.174721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.399 [2024-11-18 20:37:10.175046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.399 [2024-11-18 20:37:10.175074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.399 [2024-11-18 20:37:10.175090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.399 [2024-11-18 20:37:10.175306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.399 [2024-11-18 20:37:10.175538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.399 [2024-11-18 20:37:10.175558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.399 [2024-11-18 20:37:10.175573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.399 [2024-11-18 20:37:10.175586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.399 [2024-11-18 20:37:10.188231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.399 [2024-11-18 20:37:10.188582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.399 [2024-11-18 20:37:10.188627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.399 [2024-11-18 20:37:10.188654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.399 [2024-11-18 20:37:10.188870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.400 [2024-11-18 20:37:10.189101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.400 [2024-11-18 20:37:10.189121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.400 [2024-11-18 20:37:10.189134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.400 [2024-11-18 20:37:10.189147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.400 [2024-11-18 20:37:10.201819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.400 [2024-11-18 20:37:10.202222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.400 [2024-11-18 20:37:10.202249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.400 [2024-11-18 20:37:10.202276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.400 [2024-11-18 20:37:10.202508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.400 [2024-11-18 20:37:10.202746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.400 [2024-11-18 20:37:10.202783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.400 [2024-11-18 20:37:10.202799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.400 [2024-11-18 20:37:10.202813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.400 [2024-11-18 20:37:10.215251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.400 [2024-11-18 20:37:10.215679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.400 [2024-11-18 20:37:10.215709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.400 [2024-11-18 20:37:10.215728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.400 [2024-11-18 20:37:10.215969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.400 [2024-11-18 20:37:10.216172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.400 [2024-11-18 20:37:10.216190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.400 [2024-11-18 20:37:10.216203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.400 [2024-11-18 20:37:10.216215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.400 [2024-11-18 20:37:10.228568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.400 [2024-11-18 20:37:10.228903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.400 [2024-11-18 20:37:10.228947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.400 [2024-11-18 20:37:10.228964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.400 [2024-11-18 20:37:10.229190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.400 [2024-11-18 20:37:10.229395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.400 [2024-11-18 20:37:10.229414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.400 [2024-11-18 20:37:10.229427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.400 [2024-11-18 20:37:10.229439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.400 [2024-11-18 20:37:10.242123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.400 [2024-11-18 20:37:10.242507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.400 [2024-11-18 20:37:10.242545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.400 [2024-11-18 20:37:10.242561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.400 [2024-11-18 20:37:10.242846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.400 [2024-11-18 20:37:10.243086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.400 [2024-11-18 20:37:10.243106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.400 [2024-11-18 20:37:10.243140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.400 [2024-11-18 20:37:10.243154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.400 [2024-11-18 20:37:10.255625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.400 [2024-11-18 20:37:10.256035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.400 [2024-11-18 20:37:10.256085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.400 [2024-11-18 20:37:10.256111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.400 [2024-11-18 20:37:10.256363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.400 [2024-11-18 20:37:10.256578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.400 [2024-11-18 20:37:10.256598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.400 [2024-11-18 20:37:10.256612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.400 [2024-11-18 20:37:10.256656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.400 [2024-11-18 20:37:10.269147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.400 [2024-11-18 20:37:10.269496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.400 [2024-11-18 20:37:10.269529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.400 [2024-11-18 20:37:10.269547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.400 [2024-11-18 20:37:10.269817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.400 [2024-11-18 20:37:10.270064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.400 [2024-11-18 20:37:10.270099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.400 [2024-11-18 20:37:10.270114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.400 [2024-11-18 20:37:10.270127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.400 [2024-11-18 20:37:10.282811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.400 [2024-11-18 20:37:10.283225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.400 [2024-11-18 20:37:10.283280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.400 [2024-11-18 20:37:10.283297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.400 [2024-11-18 20:37:10.283538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.400 [2024-11-18 20:37:10.283799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.400 [2024-11-18 20:37:10.283822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.400 [2024-11-18 20:37:10.283838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.400 [2024-11-18 20:37:10.283852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.400 [2024-11-18 20:37:10.296274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.400 [2024-11-18 20:37:10.296775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.400 [2024-11-18 20:37:10.296805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.400 [2024-11-18 20:37:10.296822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.400 [2024-11-18 20:37:10.297066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.400 [2024-11-18 20:37:10.297265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.400 [2024-11-18 20:37:10.297299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.400 [2024-11-18 20:37:10.297313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.400 [2024-11-18 20:37:10.297326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.401 [2024-11-18 20:37:10.309595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.401 [2024-11-18 20:37:10.310016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.401 [2024-11-18 20:37:10.310052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.401 [2024-11-18 20:37:10.310069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.401 [2024-11-18 20:37:10.310303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.401 [2024-11-18 20:37:10.310524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.401 [2024-11-18 20:37:10.310543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.401 [2024-11-18 20:37:10.310556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.401 [2024-11-18 20:37:10.310568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.401 [2024-11-18 20:37:10.323306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.401 [2024-11-18 20:37:10.323739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.401 [2024-11-18 20:37:10.323769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.401 [2024-11-18 20:37:10.323788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.401 [2024-11-18 20:37:10.324018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.401 [2024-11-18 20:37:10.324250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.401 [2024-11-18 20:37:10.324270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.401 [2024-11-18 20:37:10.324299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.401 [2024-11-18 20:37:10.324314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.401 [2024-11-18 20:37:10.337098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.401 [2024-11-18 20:37:10.337459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.401 [2024-11-18 20:37:10.337507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.401 [2024-11-18 20:37:10.337524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.401 [2024-11-18 20:37:10.337765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.401 [2024-11-18 20:37:10.337992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.401 [2024-11-18 20:37:10.338011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.401 [2024-11-18 20:37:10.338024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.401 [2024-11-18 20:37:10.338036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.401 [2024-11-18 20:37:10.350497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.401 4547.00 IOPS, 17.76 MiB/s [2024-11-18T19:37:10.409Z]
[2024-11-18 20:37:10.352419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.401 [2024-11-18 20:37:10.352449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.401 [2024-11-18 20:37:10.352466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.401 [2024-11-18 20:37:10.352698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.401 [2024-11-18 20:37:10.352961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.401 [2024-11-18 20:37:10.352983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.401 [2024-11-18 20:37:10.353013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.401 [2024-11-18 20:37:10.353027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.401 [2024-11-18 20:37:10.364010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.401 [2024-11-18 20:37:10.364410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.401 [2024-11-18 20:37:10.364500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.401 [2024-11-18 20:37:10.364517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.401 [2024-11-18 20:37:10.364765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.401 [2024-11-18 20:37:10.365005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.401 [2024-11-18 20:37:10.365026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.401 [2024-11-18 20:37:10.365040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.401 [2024-11-18 20:37:10.365053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.401 [2024-11-18 20:37:10.377380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.401 [2024-11-18 20:37:10.377800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.401 [2024-11-18 20:37:10.377828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.401 [2024-11-18 20:37:10.377850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.401 [2024-11-18 20:37:10.378084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.401 [2024-11-18 20:37:10.378314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.401 [2024-11-18 20:37:10.378333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.401 [2024-11-18 20:37:10.378346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.401 [2024-11-18 20:37:10.378358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.401 [2024-11-18 20:37:10.390742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.401 [2024-11-18 20:37:10.391169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.401 [2024-11-18 20:37:10.391213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.401 [2024-11-18 20:37:10.391230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.401 [2024-11-18 20:37:10.391476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.401 [2024-11-18 20:37:10.391706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.401 [2024-11-18 20:37:10.391728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.401 [2024-11-18 20:37:10.391742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.401 [2024-11-18 20:37:10.391755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.401 [2024-11-18 20:37:10.404509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.401 [2024-11-18 20:37:10.404907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.401 [2024-11-18 20:37:10.404945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.401 [2024-11-18 20:37:10.404963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.401 [2024-11-18 20:37:10.405210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.401 [2024-11-18 20:37:10.405458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.401 [2024-11-18 20:37:10.405489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.401 [2024-11-18 20:37:10.405512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.401 [2024-11-18 20:37:10.405531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.661 [2024-11-18 20:37:10.417902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.661 [2024-11-18 20:37:10.418360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.661 [2024-11-18 20:37:10.418390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.661 [2024-11-18 20:37:10.418407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.661 [2024-11-18 20:37:10.418652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.661 [2024-11-18 20:37:10.418873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.661 [2024-11-18 20:37:10.418894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.661 [2024-11-18 20:37:10.418912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.661 [2024-11-18 20:37:10.418926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.661 [2024-11-18 20:37:10.431327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.661 [2024-11-18 20:37:10.431749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.661 [2024-11-18 20:37:10.431778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.661 [2024-11-18 20:37:10.431794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.661 [2024-11-18 20:37:10.432037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.661 [2024-11-18 20:37:10.432265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.661 [2024-11-18 20:37:10.432285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.661 [2024-11-18 20:37:10.432300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.661 [2024-11-18 20:37:10.432313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.661 [2024-11-18 20:37:10.444536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.661 [2024-11-18 20:37:10.444977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.661 [2024-11-18 20:37:10.445021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.661 [2024-11-18 20:37:10.445038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.661 [2024-11-18 20:37:10.445273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.661 [2024-11-18 20:37:10.445494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.661 [2024-11-18 20:37:10.445514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.661 [2024-11-18 20:37:10.445526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.661 [2024-11-18 20:37:10.445539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.661 [2024-11-18 20:37:10.457971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.661 [2024-11-18 20:37:10.458316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.661 [2024-11-18 20:37:10.458345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.661 [2024-11-18 20:37:10.458362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.661 [2024-11-18 20:37:10.458599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.661 [2024-11-18 20:37:10.458833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.661 [2024-11-18 20:37:10.458854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.661 [2024-11-18 20:37:10.458867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.661 [2024-11-18 20:37:10.458880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.661 [2024-11-18 20:37:10.471288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.661 [2024-11-18 20:37:10.471597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.661 [2024-11-18 20:37:10.471624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.661 [2024-11-18 20:37:10.471664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.661 [2024-11-18 20:37:10.471910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.661 [2024-11-18 20:37:10.472156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.661 [2024-11-18 20:37:10.472176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.661 [2024-11-18 20:37:10.472189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.661 [2024-11-18 20:37:10.472201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.661 [2024-11-18 20:37:10.484440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.661 [2024-11-18 20:37:10.484849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.661 [2024-11-18 20:37:10.484878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.661 [2024-11-18 20:37:10.484893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.661 [2024-11-18 20:37:10.485109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.661 [2024-11-18 20:37:10.485312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.661 [2024-11-18 20:37:10.485331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.661 [2024-11-18 20:37:10.485344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.661 [2024-11-18 20:37:10.485356] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.661 [2024-11-18 20:37:10.497700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.661 [2024-11-18 20:37:10.498099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.661 [2024-11-18 20:37:10.498125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.661 [2024-11-18 20:37:10.498141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.661 [2024-11-18 20:37:10.498375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.661 [2024-11-18 20:37:10.498579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.661 [2024-11-18 20:37:10.498598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.661 [2024-11-18 20:37:10.498611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.661 [2024-11-18 20:37:10.498647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.661 [2024-11-18 20:37:10.510964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.661 [2024-11-18 20:37:10.511272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.661 [2024-11-18 20:37:10.511299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.662 [2024-11-18 20:37:10.511320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.662 [2024-11-18 20:37:10.511537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.662 [2024-11-18 20:37:10.511785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.662 [2024-11-18 20:37:10.511806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.662 [2024-11-18 20:37:10.511819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.662 [2024-11-18 20:37:10.511832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.662 [2024-11-18 20:37:10.524141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.662 [2024-11-18 20:37:10.524453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.662 [2024-11-18 20:37:10.524480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.662 [2024-11-18 20:37:10.524497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.662 [2024-11-18 20:37:10.524744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.662 [2024-11-18 20:37:10.525012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.662 [2024-11-18 20:37:10.525033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.662 [2024-11-18 20:37:10.525047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.662 [2024-11-18 20:37:10.525060] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.662 [2024-11-18 20:37:10.537573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.662 [2024-11-18 20:37:10.537947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.662 [2024-11-18 20:37:10.537993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.662 [2024-11-18 20:37:10.538010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.662 [2024-11-18 20:37:10.538243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.662 [2024-11-18 20:37:10.538448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.662 [2024-11-18 20:37:10.538468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.662 [2024-11-18 20:37:10.538482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.662 [2024-11-18 20:37:10.538495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.662 [2024-11-18 20:37:10.551072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.662 [2024-11-18 20:37:10.551538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.662 [2024-11-18 20:37:10.551591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.662 [2024-11-18 20:37:10.551609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.662 [2024-11-18 20:37:10.551851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.662 [2024-11-18 20:37:10.552105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.662 [2024-11-18 20:37:10.552128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.662 [2024-11-18 20:37:10.552142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.662 [2024-11-18 20:37:10.552157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.662 [2024-11-18 20:37:10.564452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.662 [2024-11-18 20:37:10.564891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.662 [2024-11-18 20:37:10.564923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.662 [2024-11-18 20:37:10.564941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.662 [2024-11-18 20:37:10.565194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.662 [2024-11-18 20:37:10.565398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.662 [2024-11-18 20:37:10.565418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.662 [2024-11-18 20:37:10.565431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.662 [2024-11-18 20:37:10.565445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.662 [2024-11-18 20:37:10.577758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.662 [2024-11-18 20:37:10.578147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.662 [2024-11-18 20:37:10.578199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.662 [2024-11-18 20:37:10.578216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.662 [2024-11-18 20:37:10.578460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.662 [2024-11-18 20:37:10.578674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.662 [2024-11-18 20:37:10.578695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.662 [2024-11-18 20:37:10.578709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.662 [2024-11-18 20:37:10.578723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.662 [2024-11-18 20:37:10.591124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.662 [2024-11-18 20:37:10.591540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.662 [2024-11-18 20:37:10.591632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.662 [2024-11-18 20:37:10.591661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.662 [2024-11-18 20:37:10.591915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.662 [2024-11-18 20:37:10.592161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.662 [2024-11-18 20:37:10.592183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.662 [2024-11-18 20:37:10.592202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.662 [2024-11-18 20:37:10.592217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.662 [2024-11-18 20:37:10.604485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.662 [2024-11-18 20:37:10.604808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.662 [2024-11-18 20:37:10.604849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.662 [2024-11-18 20:37:10.604883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.662 [2024-11-18 20:37:10.605129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.662 [2024-11-18 20:37:10.605336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.662 [2024-11-18 20:37:10.605357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.662 [2024-11-18 20:37:10.605370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.662 [2024-11-18 20:37:10.605383] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.662 [2024-11-18 20:37:10.617844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.662 [2024-11-18 20:37:10.618253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.663 [2024-11-18 20:37:10.618281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.663 [2024-11-18 20:37:10.618298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.663 [2024-11-18 20:37:10.618533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.663 [2024-11-18 20:37:10.618784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.663 [2024-11-18 20:37:10.618804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.663 [2024-11-18 20:37:10.618818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.663 [2024-11-18 20:37:10.618831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.663 [2024-11-18 20:37:10.631007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.663 [2024-11-18 20:37:10.631322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.663 [2024-11-18 20:37:10.631351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.663 [2024-11-18 20:37:10.631367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.663 [2024-11-18 20:37:10.631564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.663 [2024-11-18 20:37:10.631797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.663 [2024-11-18 20:37:10.631818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.663 [2024-11-18 20:37:10.631831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.663 [2024-11-18 20:37:10.631844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.663 [2024-11-18 20:37:10.644138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.663 [2024-11-18 20:37:10.644509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.663 [2024-11-18 20:37:10.644558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.663 [2024-11-18 20:37:10.644575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.663 [2024-11-18 20:37:10.644821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.663 [2024-11-18 20:37:10.645059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.663 [2024-11-18 20:37:10.645079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.663 [2024-11-18 20:37:10.645093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.663 [2024-11-18 20:37:10.645120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.663 [2024-11-18 20:37:10.657556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.663 [2024-11-18 20:37:10.657956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.663 [2024-11-18 20:37:10.658009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.663 [2024-11-18 20:37:10.658025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.663 [2024-11-18 20:37:10.658281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.663 [2024-11-18 20:37:10.658506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.663 [2024-11-18 20:37:10.658527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.663 [2024-11-18 20:37:10.658541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.663 [2024-11-18 20:37:10.658569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.922 [2024-11-18 20:37:10.671046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.922 [2024-11-18 20:37:10.671436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.922 [2024-11-18 20:37:10.671467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.922 [2024-11-18 20:37:10.671485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.922 [2024-11-18 20:37:10.671740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.922 [2024-11-18 20:37:10.671961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.922 [2024-11-18 20:37:10.671986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.922 [2024-11-18 20:37:10.672023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.922 [2024-11-18 20:37:10.672045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.922 [2024-11-18 20:37:10.684500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.922 [2024-11-18 20:37:10.684913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.922 [2024-11-18 20:37:10.684959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.922 [2024-11-18 20:37:10.684982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.922 [2024-11-18 20:37:10.685214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.922 [2024-11-18 20:37:10.685432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.922 [2024-11-18 20:37:10.685453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.922 [2024-11-18 20:37:10.685482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.922 [2024-11-18 20:37:10.685496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.922 [2024-11-18 20:37:10.698194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.922 [2024-11-18 20:37:10.698558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.922 [2024-11-18 20:37:10.698588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.922 [2024-11-18 20:37:10.698622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.922 [2024-11-18 20:37:10.698849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.923 [2024-11-18 20:37:10.699098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.923 [2024-11-18 20:37:10.699119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.923 [2024-11-18 20:37:10.699132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.923 [2024-11-18 20:37:10.699147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.923 [2024-11-18 20:37:10.711723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.923 [2024-11-18 20:37:10.712075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.923 [2024-11-18 20:37:10.712105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.923 [2024-11-18 20:37:10.712123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.923 [2024-11-18 20:37:10.712363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.923 [2024-11-18 20:37:10.712606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.923 [2024-11-18 20:37:10.712625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.923 [2024-11-18 20:37:10.712646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.923 [2024-11-18 20:37:10.712677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.923 [2024-11-18 20:37:10.725212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:58.923 [2024-11-18 20:37:10.725556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.923 [2024-11-18 20:37:10.725586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:58.923 [2024-11-18 20:37:10.725603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:58.923 [2024-11-18 20:37:10.725845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:58.923 [2024-11-18 20:37:10.726093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:58.923 [2024-11-18 20:37:10.726114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:58.923 [2024-11-18 20:37:10.726149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:58.923 [2024-11-18 20:37:10.726162] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:58.923 [2024-11-18 20:37:10.738615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.923 [2024-11-18 20:37:10.739079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.923 [2024-11-18 20:37:10.739133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.923 [2024-11-18 20:37:10.739150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.923 [2024-11-18 20:37:10.739405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.923 [2024-11-18 20:37:10.739592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.923 [2024-11-18 20:37:10.739611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.923 [2024-11-18 20:37:10.739650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.923 [2024-11-18 20:37:10.739667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.923 [2024-11-18 20:37:10.752047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.923 [2024-11-18 20:37:10.752361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.923 [2024-11-18 20:37:10.752401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.923 [2024-11-18 20:37:10.752435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.923 [2024-11-18 20:37:10.752675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.923 [2024-11-18 20:37:10.752881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.923 [2024-11-18 20:37:10.752903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.923 [2024-11-18 20:37:10.752933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.923 [2024-11-18 20:37:10.752947] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.923 [2024-11-18 20:37:10.765027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.923 [2024-11-18 20:37:10.765372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.923 [2024-11-18 20:37:10.765401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.923 [2024-11-18 20:37:10.765418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.923 [2024-11-18 20:37:10.765663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.923 [2024-11-18 20:37:10.765877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.923 [2024-11-18 20:37:10.765896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.923 [2024-11-18 20:37:10.765916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.923 [2024-11-18 20:37:10.765943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.923 [2024-11-18 20:37:10.777990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.923 [2024-11-18 20:37:10.778334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.923 [2024-11-18 20:37:10.778361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.923 [2024-11-18 20:37:10.778377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.923 [2024-11-18 20:37:10.778592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.923 [2024-11-18 20:37:10.778824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.923 [2024-11-18 20:37:10.778845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.923 [2024-11-18 20:37:10.778858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.923 [2024-11-18 20:37:10.778870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.923 [2024-11-18 20:37:10.791122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.923 [2024-11-18 20:37:10.791528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.923 [2024-11-18 20:37:10.791557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.923 [2024-11-18 20:37:10.791574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.923 [2024-11-18 20:37:10.791821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.923 [2024-11-18 20:37:10.792027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.923 [2024-11-18 20:37:10.792048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.923 [2024-11-18 20:37:10.792061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.923 [2024-11-18 20:37:10.792073] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.923 [2024-11-18 20:37:10.804147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.923 [2024-11-18 20:37:10.804537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.923 [2024-11-18 20:37:10.804587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.923 [2024-11-18 20:37:10.804604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.923 [2024-11-18 20:37:10.804886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.923 [2024-11-18 20:37:10.805108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.924 [2024-11-18 20:37:10.805128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.924 [2024-11-18 20:37:10.805142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.924 [2024-11-18 20:37:10.805154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.924 [2024-11-18 20:37:10.817540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.924 [2024-11-18 20:37:10.817925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.924 [2024-11-18 20:37:10.817971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.924 [2024-11-18 20:37:10.817989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.924 [2024-11-18 20:37:10.818225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.924 [2024-11-18 20:37:10.818429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.924 [2024-11-18 20:37:10.818449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.924 [2024-11-18 20:37:10.818463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.924 [2024-11-18 20:37:10.818475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.924 [2024-11-18 20:37:10.830658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.924 [2024-11-18 20:37:10.830969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.924 [2024-11-18 20:37:10.830997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.924 [2024-11-18 20:37:10.831014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.924 [2024-11-18 20:37:10.831231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.924 [2024-11-18 20:37:10.831435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.924 [2024-11-18 20:37:10.831455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.924 [2024-11-18 20:37:10.831468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.924 [2024-11-18 20:37:10.831481] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.924 [2024-11-18 20:37:10.843820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.924 [2024-11-18 20:37:10.844148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.924 [2024-11-18 20:37:10.844175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.924 [2024-11-18 20:37:10.844191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.924 [2024-11-18 20:37:10.844401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.924 [2024-11-18 20:37:10.844604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.924 [2024-11-18 20:37:10.844649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.924 [2024-11-18 20:37:10.844666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.924 [2024-11-18 20:37:10.844693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.924 [2024-11-18 20:37:10.857123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.924 [2024-11-18 20:37:10.857535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.924 [2024-11-18 20:37:10.857587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.924 [2024-11-18 20:37:10.857607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.924 [2024-11-18 20:37:10.857845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.924 [2024-11-18 20:37:10.858068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.924 [2024-11-18 20:37:10.858088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.924 [2024-11-18 20:37:10.858101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.924 [2024-11-18 20:37:10.858114] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.924 [2024-11-18 20:37:10.870185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.924 [2024-11-18 20:37:10.870577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.924 [2024-11-18 20:37:10.870621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.924 [2024-11-18 20:37:10.870645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.924 [2024-11-18 20:37:10.870896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.924 [2024-11-18 20:37:10.871116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.924 [2024-11-18 20:37:10.871136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.924 [2024-11-18 20:37:10.871150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.924 [2024-11-18 20:37:10.871162] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.924 [2024-11-18 20:37:10.883219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.924 [2024-11-18 20:37:10.883610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.924 [2024-11-18 20:37:10.883671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.924 [2024-11-18 20:37:10.883688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.924 [2024-11-18 20:37:10.883932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.924 [2024-11-18 20:37:10.884119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.924 [2024-11-18 20:37:10.884139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.924 [2024-11-18 20:37:10.884152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.924 [2024-11-18 20:37:10.884164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.924 [2024-11-18 20:37:10.896324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.924 [2024-11-18 20:37:10.896732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.924 [2024-11-18 20:37:10.896761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.924 [2024-11-18 20:37:10.896777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.924 [2024-11-18 20:37:10.897012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.924 [2024-11-18 20:37:10.897219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.924 [2024-11-18 20:37:10.897239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.924 [2024-11-18 20:37:10.897252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.924 [2024-11-18 20:37:10.897265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.924 [2024-11-18 20:37:10.909372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.924 [2024-11-18 20:37:10.909777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.924 [2024-11-18 20:37:10.909806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.924 [2024-11-18 20:37:10.909822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.924 [2024-11-18 20:37:10.910060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.925 [2024-11-18 20:37:10.910247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.925 [2024-11-18 20:37:10.910267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.925 [2024-11-18 20:37:10.910280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.925 [2024-11-18 20:37:10.910292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.925 [2024-11-18 20:37:10.922503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.925 [2024-11-18 20:37:10.922856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.925 [2024-11-18 20:37:10.922886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:58.925 [2024-11-18 20:37:10.922903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:58.925 [2024-11-18 20:37:10.923139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:58.925 [2024-11-18 20:37:10.923343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.925 [2024-11-18 20:37:10.923363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.925 [2024-11-18 20:37:10.923377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.925 [2024-11-18 20:37:10.923390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:59.185 [2024-11-18 20:37:10.935950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:59.185 [2024-11-18 20:37:10.936310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.185 [2024-11-18 20:37:10.936340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:59.185 [2024-11-18 20:37:10.936357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:59.185 [2024-11-18 20:37:10.936572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:59.185 [2024-11-18 20:37:10.936825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:59.185 [2024-11-18 20:37:10.936846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:59.185 [2024-11-18 20:37:10.936865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:59.185 [2024-11-18 20:37:10.936879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:59.185 [2024-11-18 20:37:10.949071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:59.185 [2024-11-18 20:37:10.949425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.185 [2024-11-18 20:37:10.949455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:59.185 [2024-11-18 20:37:10.949473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:59.185 [2024-11-18 20:37:10.949724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:59.185 [2024-11-18 20:37:10.949923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:59.185 [2024-11-18 20:37:10.949958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:59.185 [2024-11-18 20:37:10.949971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:59.185 [2024-11-18 20:37:10.949984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:59.185 [2024-11-18 20:37:10.962240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:59.185 [2024-11-18 20:37:10.962585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.185 [2024-11-18 20:37:10.962629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:59.185 [2024-11-18 20:37:10.962657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:59.185 [2024-11-18 20:37:10.962919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:59.185 [2024-11-18 20:37:10.963124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:59.185 [2024-11-18 20:37:10.963144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:59.185 [2024-11-18 20:37:10.963157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:59.185 [2024-11-18 20:37:10.963169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:59.185 [2024-11-18 20:37:10.975293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:59.185 [2024-11-18 20:37:10.975644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.185 [2024-11-18 20:37:10.975677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:59.185 [2024-11-18 20:37:10.975694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:59.185 [2024-11-18 20:37:10.975929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:59.185 [2024-11-18 20:37:10.976131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:59.185 [2024-11-18 20:37:10.976151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:59.185 [2024-11-18 20:37:10.976164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:59.185 [2024-11-18 20:37:10.976176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:59.185 [2024-11-18 20:37:10.988386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:59.185 [2024-11-18 20:37:10.988797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.185 [2024-11-18 20:37:10.988826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:59.185 [2024-11-18 20:37:10.988843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:59.185 [2024-11-18 20:37:10.989079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:59.185 [2024-11-18 20:37:10.989282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:59.185 [2024-11-18 20:37:10.989302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:59.185 [2024-11-18 20:37:10.989315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:59.185 [2024-11-18 20:37:10.989328] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:59.185 [2024-11-18 20:37:11.001430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:59.185 [2024-11-18 20:37:11.001784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.185 [2024-11-18 20:37:11.001812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:59.185 [2024-11-18 20:37:11.001829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:59.185 [2024-11-18 20:37:11.002044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:59.185 [2024-11-18 20:37:11.002246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:59.185 [2024-11-18 20:37:11.002266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:59.185 [2024-11-18 20:37:11.002280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:59.185 [2024-11-18 20:37:11.002292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:59.185 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 400365 Killed "${NVMF_APP[@]}" "$@"
00:35:59.185 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:35:59.186 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:35:59.186 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:35:59.186 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:59.186 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:59.186 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=401315
00:35:59.186 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:35:59.186 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 401315
00:35:59.186 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 401315 ']'
00:35:59.186 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:59.186 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
[2024-11-18 20:37:11.015016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:59.186 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:59.186 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-11-18 20:37:11.015375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-18 20:37:11.015406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
[2024-11-18 20:37:11.015424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
[2024-11-18 20:37:11.015681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
[2024-11-18 20:37:11.015901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
[2024-11-18 20:37:11.015939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
[2024-11-18 20:37:11.015954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
[2024-11-18 20:37:11.015968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:59.186 [2024-11-18 20:37:11.028387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:59.186 [2024-11-18 20:37:11.028758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.186 [2024-11-18 20:37:11.028787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:59.186 [2024-11-18 20:37:11.028805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:59.186 [2024-11-18 20:37:11.029061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:59.186 [2024-11-18 20:37:11.029255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:59.186 [2024-11-18 20:37:11.029275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:59.186 [2024-11-18 20:37:11.029288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:59.186 [2024-11-18 20:37:11.029300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:59.186 [2024-11-18 20:37:11.041818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:59.186 [2024-11-18 20:37:11.042237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.186 [2024-11-18 20:37:11.042265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420
00:35:59.186 [2024-11-18 20:37:11.042281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set
00:35:59.186 [2024-11-18 20:37:11.042504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor
00:35:59.186 [2024-11-18 20:37:11.042748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:59.186 [2024-11-18 20:37:11.042769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:59.186 [2024-11-18 20:37:11.042784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:59.186 [2024-11-18 20:37:11.042797] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:59.186 [2024-11-18 20:37:11.055140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:59.186 [2024-11-18 20:37:11.055541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.186 [2024-11-18 20:37:11.055576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:59.186 [2024-11-18 20:37:11.055595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:59.186 [2024-11-18 20:37:11.055826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:59.186 [2024-11-18 20:37:11.056072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:59.186 [2024-11-18 20:37:11.056092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:59.186 [2024-11-18 20:37:11.056106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:59.186 [2024-11-18 20:37:11.056119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:59.186 [2024-11-18 20:37:11.061261] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:35:59.186 [2024-11-18 20:37:11.061320] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:59.186 [2024-11-18 20:37:11.068422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:59.186 [2024-11-18 20:37:11.068749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.186 [2024-11-18 20:37:11.068796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:59.186 [2024-11-18 20:37:11.068815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:59.186 [2024-11-18 20:37:11.069056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:59.186 [2024-11-18 20:37:11.069265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:59.186 [2024-11-18 20:37:11.069284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:59.186 [2024-11-18 20:37:11.069298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:59.186 [2024-11-18 20:37:11.069310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:59.186 [2024-11-18 20:37:11.081820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:59.186 [2024-11-18 20:37:11.082216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.186 [2024-11-18 20:37:11.082247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:59.186 [2024-11-18 20:37:11.082265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:59.186 [2024-11-18 20:37:11.082510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:59.186 [2024-11-18 20:37:11.082746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:59.186 [2024-11-18 20:37:11.082767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:59.186 [2024-11-18 20:37:11.082780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:59.186 [2024-11-18 20:37:11.082793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:59.186 [2024-11-18 20:37:11.095066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:59.186 [2024-11-18 20:37:11.095391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.187 [2024-11-18 20:37:11.095424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:59.187 [2024-11-18 20:37:11.095442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:59.187 [2024-11-18 20:37:11.095691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:59.187 [2024-11-18 20:37:11.095891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:59.187 [2024-11-18 20:37:11.095911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:59.187 [2024-11-18 20:37:11.095925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:59.187 [2024-11-18 20:37:11.095937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:59.187 [2024-11-18 20:37:11.108477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:59.187 [2024-11-18 20:37:11.108918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.187 [2024-11-18 20:37:11.108948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:59.187 [2024-11-18 20:37:11.108965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:59.187 [2024-11-18 20:37:11.109201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:59.187 [2024-11-18 20:37:11.109410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:59.187 [2024-11-18 20:37:11.109429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:59.187 [2024-11-18 20:37:11.109442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:59.187 [2024-11-18 20:37:11.109454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:59.187 [2024-11-18 20:37:11.121800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:59.187 [2024-11-18 20:37:11.122191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.187 [2024-11-18 20:37:11.122219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:59.187 [2024-11-18 20:37:11.122236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:59.187 [2024-11-18 20:37:11.122472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:59.187 [2024-11-18 20:37:11.122708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:59.187 [2024-11-18 20:37:11.122730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:59.187 [2024-11-18 20:37:11.122743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:59.187 [2024-11-18 20:37:11.122756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:59.187 [2024-11-18 20:37:11.134251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:59.187 [2024-11-18 20:37:11.135121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:59.187 [2024-11-18 20:37:11.135537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.187 [2024-11-18 20:37:11.135565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:59.187 [2024-11-18 20:37:11.135583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:59.187 [2024-11-18 20:37:11.135842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:59.187 [2024-11-18 20:37:11.136055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:59.187 [2024-11-18 20:37:11.136075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:59.187 [2024-11-18 20:37:11.136089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:59.187 [2024-11-18 20:37:11.136101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:59.187 [2024-11-18 20:37:11.148383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:59.187 [2024-11-18 20:37:11.148967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.187 [2024-11-18 20:37:11.149021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:59.187 [2024-11-18 20:37:11.149043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:59.187 [2024-11-18 20:37:11.149290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:59.187 [2024-11-18 20:37:11.149512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:59.187 [2024-11-18 20:37:11.149533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:59.187 [2024-11-18 20:37:11.149550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:59.187 [2024-11-18 20:37:11.149566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:59.187 [2024-11-18 20:37:11.161706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:59.187 [2024-11-18 20:37:11.162087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.187 [2024-11-18 20:37:11.162116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:59.187 [2024-11-18 20:37:11.162133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:59.187 [2024-11-18 20:37:11.162357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:59.187 [2024-11-18 20:37:11.162568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:59.187 [2024-11-18 20:37:11.162588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:59.187 [2024-11-18 20:37:11.162602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:59.187 [2024-11-18 20:37:11.162615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:59.187 [2024-11-18 20:37:11.174907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:59.187 [2024-11-18 20:37:11.175249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.187 [2024-11-18 20:37:11.175292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:59.187 [2024-11-18 20:37:11.175310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:59.187 [2024-11-18 20:37:11.175528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:59.187 [2024-11-18 20:37:11.175768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:59.187 [2024-11-18 20:37:11.175800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:59.187 [2024-11-18 20:37:11.175816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:59.187 [2024-11-18 20:37:11.175829] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:59.187 [2024-11-18 20:37:11.179888] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:59.187 [2024-11-18 20:37:11.179934] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:59.187 [2024-11-18 20:37:11.179947] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:59.187 [2024-11-18 20:37:11.179958] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:35:59.187 [2024-11-18 20:37:11.179967] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:59.187 [2024-11-18 20:37:11.181293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:59.187 [2024-11-18 20:37:11.181357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:59.187 [2024-11-18 20:37:11.181360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:59.187 [2024-11-18 20:37:11.188539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:59.187 [2024-11-18 20:37:11.189040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.187 [2024-11-18 20:37:11.189088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:59.187 [2024-11-18 20:37:11.189112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:59.187 [2024-11-18 20:37:11.189350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:59.187 [2024-11-18 20:37:11.189586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:59.187 [2024-11-18 20:37:11.189611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:59.188 [2024-11-18 20:37:11.189629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:59.188 [2024-11-18 20:37:11.189661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:59.448 [2024-11-18 20:37:11.202152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:59.448 [2024-11-18 20:37:11.202648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.448 [2024-11-18 20:37:11.202688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:59.448 [2024-11-18 20:37:11.202710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:59.448 [2024-11-18 20:37:11.202952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:59.448 [2024-11-18 20:37:11.203179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:59.448 [2024-11-18 20:37:11.203201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:59.448 [2024-11-18 20:37:11.203219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:59.448 [2024-11-18 20:37:11.203236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:59.448 [2024-11-18 20:37:11.215759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:59.448 [2024-11-18 20:37:11.216252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.448 [2024-11-18 20:37:11.216302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:59.448 [2024-11-18 20:37:11.216324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:59.448 [2024-11-18 20:37:11.216567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:59.448 [2024-11-18 20:37:11.216827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:59.448 [2024-11-18 20:37:11.216849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:59.449 [2024-11-18 20:37:11.216867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:59.449 [2024-11-18 20:37:11.216885] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:59.449 [2024-11-18 20:37:11.229274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:59.449 [2024-11-18 20:37:11.229760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.449 [2024-11-18 20:37:11.229801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:59.449 [2024-11-18 20:37:11.229823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:59.449 [2024-11-18 20:37:11.230064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:59.449 [2024-11-18 20:37:11.230276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:59.449 [2024-11-18 20:37:11.230297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:59.449 [2024-11-18 20:37:11.230315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:59.449 [2024-11-18 20:37:11.230331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:59.449 [2024-11-18 20:37:11.242861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:59.449 [2024-11-18 20:37:11.243303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.449 [2024-11-18 20:37:11.243340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:59.449 [2024-11-18 20:37:11.243361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:59.449 [2024-11-18 20:37:11.243605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:59.449 [2024-11-18 20:37:11.243860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:59.449 [2024-11-18 20:37:11.243884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:59.449 [2024-11-18 20:37:11.243901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:59.449 [2024-11-18 20:37:11.243918] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:59.449 [2024-11-18 20:37:11.256474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:59.449 [2024-11-18 20:37:11.257011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.449 [2024-11-18 20:37:11.257051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:59.449 [2024-11-18 20:37:11.257073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:59.449 [2024-11-18 20:37:11.257318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:59.449 [2024-11-18 20:37:11.257537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:59.449 [2024-11-18 20:37:11.257573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:59.449 [2024-11-18 20:37:11.257592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:59.449 [2024-11-18 20:37:11.257609] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:59.449 [2024-11-18 20:37:11.270127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:59.449 [2024-11-18 20:37:11.270459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.449 [2024-11-18 20:37:11.270489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:59.449 [2024-11-18 20:37:11.270507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:59.449 [2024-11-18 20:37:11.270747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:59.449 [2024-11-18 20:37:11.270975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:59.449 [2024-11-18 20:37:11.270996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:59.449 [2024-11-18 20:37:11.271010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:59.449 [2024-11-18 20:37:11.271023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:59.449 [2024-11-18 20:37:11.283518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:59.449 [2024-11-18 20:37:11.283871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.449 [2024-11-18 20:37:11.283901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:59.449 [2024-11-18 20:37:11.283918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:59.449 [2024-11-18 20:37:11.284148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:59.449 [2024-11-18 20:37:11.284370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:59.449 [2024-11-18 20:37:11.284391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:59.449 [2024-11-18 20:37:11.284405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:59.449 [2024-11-18 20:37:11.284418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:59.449 [2024-11-18 20:37:11.297055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:59.449 [2024-11-18 20:37:11.297370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.449 [2024-11-18 20:37:11.297399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:59.449 [2024-11-18 20:37:11.297417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:59.449 [2024-11-18 20:37:11.297633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:59.449 [2024-11-18 20:37:11.297863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:59.449 [2024-11-18 20:37:11.297885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:59.449 [2024-11-18 20:37:11.297906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:59.449 [2024-11-18 20:37:11.297938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:59.449 [2024-11-18 20:37:11.310587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:59.449 [2024-11-18 20:37:11.310930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.449 [2024-11-18 20:37:11.310959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:59.449 [2024-11-18 20:37:11.310977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:59.449 [2024-11-18 20:37:11.311210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:59.449 [2024-11-18 20:37:11.311454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:59.449 [2024-11-18 20:37:11.311482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:59.449 [2024-11-18 20:37:11.311499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:59.449 [2024-11-18 20:37:11.311513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:59.449 [2024-11-18 20:37:11.324028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:59.449 [2024-11-18 20:37:11.324339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.449 [2024-11-18 20:37:11.324384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:59.449 [2024-11-18 20:37:11.324403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:59.449 [2024-11-18 20:37:11.324633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:59.449 [2024-11-18 20:37:11.324854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:59.449 [2024-11-18 20:37:11.324877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:59.449 [2024-11-18 20:37:11.324892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:59.449 [2024-11-18 20:37:11.324905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:59.449 [2024-11-18 20:37:11.337588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:59.449 [2024-11-18 20:37:11.337912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.449 [2024-11-18 20:37:11.337942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:59.449 [2024-11-18 20:37:11.337960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:59.449 [2024-11-18 20:37:11.338190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:59.449 [2024-11-18 20:37:11.338402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:59.449 [2024-11-18 20:37:11.338423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:59.449 [2024-11-18 20:37:11.338438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:59.449 [2024-11-18 20:37:11.338452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:59.450 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:59.450 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:59.450 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:59.450 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:59.450 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:59.450 [2024-11-18 20:37:11.351283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:59.450 [2024-11-18 20:37:11.351634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.450 [2024-11-18 20:37:11.351672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:59.450 [2024-11-18 20:37:11.351691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:59.450 [2024-11-18 20:37:11.351908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:59.450 [2024-11-18 20:37:11.352135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:59.450 [2024-11-18 20:37:11.352157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:59.450 [2024-11-18 20:37:11.352186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:59.450 [2024-11-18 20:37:11.352200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:59.450 3789.17 IOPS, 14.80 MiB/s [2024-11-18T19:37:11.458Z] 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:59.450 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:59.450 [2024-11-18 20:37:11.364866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:59.450 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.450 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:59.450 [2024-11-18 20:37:11.365236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.450 [2024-11-18 20:37:11.365266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:59.450 [2024-11-18 20:37:11.365283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:59.450 [2024-11-18 20:37:11.365514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:59.450 [2024-11-18 20:37:11.365772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:59.450 [2024-11-18 20:37:11.365795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:59.450 [2024-11-18 20:37:11.365810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:59.450 [2024-11-18 20:37:11.365823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:59.450 [2024-11-18 20:37:11.368610] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:59.450 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.450 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:59.450 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.450 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:59.450 [2024-11-18 20:37:11.378417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:59.450 [2024-11-18 20:37:11.378781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.450 [2024-11-18 20:37:11.378811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:59.450 [2024-11-18 20:37:11.378829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:59.450 [2024-11-18 20:37:11.379060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:59.450 [2024-11-18 20:37:11.379282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:59.450 [2024-11-18 20:37:11.379303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:59.450 [2024-11-18 20:37:11.379317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:59.450 [2024-11-18 20:37:11.379330] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:59.450 [2024-11-18 20:37:11.392027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:59.450 [2024-11-18 20:37:11.392432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.450 [2024-11-18 20:37:11.392465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:59.450 [2024-11-18 20:37:11.392483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:59.450 [2024-11-18 20:37:11.392712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:59.450 [2024-11-18 20:37:11.392948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:59.450 [2024-11-18 20:37:11.392970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:59.450 [2024-11-18 20:37:11.393001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:59.450 [2024-11-18 20:37:11.393016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:59.450 Malloc0 00:35:59.450 [2024-11-18 20:37:11.405486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:59.450 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.450 [2024-11-18 20:37:11.405939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.450 [2024-11-18 20:37:11.405972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:59.450 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:59.450 [2024-11-18 20:37:11.405994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:59.450 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.450 [2024-11-18 20:37:11.406234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:59.450 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:59.450 [2024-11-18 20:37:11.406472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:59.450 [2024-11-18 20:37:11.406495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:59.450 [2024-11-18 20:37:11.406513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:59.450 [2024-11-18 20:37:11.406529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:59.450 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.450 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:59.450 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.450 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:59.450 [2024-11-18 20:37:11.419100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:59.450 [2024-11-18 20:37:11.419410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.450 [2024-11-18 20:37:11.419454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbecf0 with addr=10.0.0.2, port=4420 00:35:59.450 [2024-11-18 20:37:11.419471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbecf0 is same with the state(6) to be set 00:35:59.450 [2024-11-18 20:37:11.419712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbecf0 (9): Bad file descriptor 00:35:59.450 [2024-11-18 20:37:11.419943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:59.450 [2024-11-18 20:37:11.419979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:59.450 [2024-11-18 20:37:11.419993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:59.450 [2024-11-18 20:37:11.420007] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:59.450 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.450 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:59.450 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.450 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:59.451 [2024-11-18 20:37:11.425219] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:59.451 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.451 20:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 400653 00:35:59.451 [2024-11-18 20:37:11.432722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:59.709 [2024-11-18 20:37:11.616659] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:36:01.610 4159.14 IOPS, 16.25 MiB/s [2024-11-18T19:37:14.552Z] 4723.25 IOPS, 18.45 MiB/s [2024-11-18T19:37:15.491Z] 5189.44 IOPS, 20.27 MiB/s [2024-11-18T19:37:16.430Z] 5550.10 IOPS, 21.68 MiB/s [2024-11-18T19:37:17.809Z] 5835.55 IOPS, 22.80 MiB/s [2024-11-18T19:37:18.407Z] 6081.25 IOPS, 23.75 MiB/s [2024-11-18T19:37:19.783Z] 6290.08 IOPS, 24.57 MiB/s [2024-11-18T19:37:20.721Z] 6463.21 IOPS, 25.25 MiB/s 00:36:08.713 Latency(us) 00:36:08.713 [2024-11-18T19:37:20.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:08.713 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:08.713 Verification LBA range: start 0x0 length 0x4000 00:36:08.713 Nvme1n1 : 15.01 6601.41 25.79 10486.68 0.00 7467.87 898.09 17670.45 00:36:08.713 [2024-11-18T19:37:20.721Z] =================================================================================================================== 00:36:08.713 [2024-11-18T19:37:20.721Z] Total : 6601.41 25.79 10486.68 0.00 7467.87 898.09 17670.45 00:36:08.713 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:36:08.713 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:08.713 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.713 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:08.713 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.713 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:36:08.713 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:36:08.713 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:08.713 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:36:08.713 20:37:20 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:08.713 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:36:08.713 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:08.713 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:08.713 rmmod nvme_tcp 00:36:08.713 rmmod nvme_fabrics 00:36:08.713 rmmod nvme_keyring 00:36:08.713 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:08.713 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:36:08.713 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:36:08.713 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 401315 ']' 00:36:08.713 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 401315 00:36:08.713 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 401315 ']' 00:36:08.713 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 401315 00:36:08.713 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:36:08.713 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:08.713 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 401315 00:36:08.713 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:08.713 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:08.713 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 401315' 00:36:08.713 killing process with pid 401315 00:36:08.713 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@973 -- # kill 401315 00:36:08.713 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 401315 00:36:08.971 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:08.971 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:08.971 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:08.971 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:36:08.971 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:36:08.971 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:08.971 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:36:08.971 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:08.971 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:08.971 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:08.971 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:08.971 20:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:11.508 20:37:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:11.509 00:36:11.509 real 0m22.304s 00:36:11.509 user 1m0.026s 00:36:11.509 sys 0m4.034s 00:36:11.509 20:37:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:11.509 20:37:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:11.509 ************************************ 00:36:11.509 END TEST nvmf_bdevperf 00:36:11.509 ************************************ 00:36:11.509 20:37:22 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:11.509 20:37:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:11.509 20:37:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:11.509 20:37:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.509 ************************************ 00:36:11.509 START TEST nvmf_target_disconnect 00:36:11.509 ************************************ 00:36:11.509 20:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:11.509 * Looking for test storage... 00:36:11.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:36:11.509 20:37:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:11.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.509 --rc genhtml_branch_coverage=1 00:36:11.509 --rc genhtml_function_coverage=1 00:36:11.509 --rc genhtml_legend=1 00:36:11.509 --rc geninfo_all_blocks=1 00:36:11.509 --rc geninfo_unexecuted_blocks=1 
00:36:11.509 00:36:11.509 ' 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:11.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.509 --rc genhtml_branch_coverage=1 00:36:11.509 --rc genhtml_function_coverage=1 00:36:11.509 --rc genhtml_legend=1 00:36:11.509 --rc geninfo_all_blocks=1 00:36:11.509 --rc geninfo_unexecuted_blocks=1 00:36:11.509 00:36:11.509 ' 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:11.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.509 --rc genhtml_branch_coverage=1 00:36:11.509 --rc genhtml_function_coverage=1 00:36:11.509 --rc genhtml_legend=1 00:36:11.509 --rc geninfo_all_blocks=1 00:36:11.509 --rc geninfo_unexecuted_blocks=1 00:36:11.509 00:36:11.509 ' 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:11.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.509 --rc genhtml_branch_coverage=1 00:36:11.509 --rc genhtml_function_coverage=1 00:36:11.509 --rc genhtml_legend=1 00:36:11.509 --rc geninfo_all_blocks=1 00:36:11.509 --rc geninfo_unexecuted_blocks=1 00:36:11.509 00:36:11.509 ' 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.509 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:36:11.510 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.510 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:36:11.510 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:11.510 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:11.510 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:11.510 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:11.510 20:37:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:11.510 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:11.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:11.510 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:11.510 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:11.510 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:11.510 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:36:11.510 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:36:11.510 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:36:11.510 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:36:11.510 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:11.510 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:11.510 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:11.510 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:11.510 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:11.510 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:11.510 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:36:11.510 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:11.510 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:11.510 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:11.510 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:36:11.510 20:37:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:36:13.416 
20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:13.416 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:13.416 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:13.416 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:13.416 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:13.416 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:13.417 20:37:25 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:13.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:13.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:36:13.417 00:36:13.417 --- 10.0.0.2 ping statistics --- 00:36:13.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:13.417 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:13.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:13.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:36:13.417 00:36:13.417 --- 10.0.0.1 ping statistics --- 00:36:13.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:13.417 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:13.417 20:37:25 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:13.417 ************************************ 00:36:13.417 START TEST nvmf_target_disconnect_tc1 00:36:13.417 ************************************ 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:36:13.417 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:13.676 [2024-11-18 20:37:25.429604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.676 [2024-11-18 20:37:25.429723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f8ca90 with 
addr=10.0.0.2, port=4420 00:36:13.676 [2024-11-18 20:37:25.429758] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:13.676 [2024-11-18 20:37:25.429781] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:13.676 [2024-11-18 20:37:25.429795] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:36:13.676 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:36:13.676 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:36:13.676 Initializing NVMe Controllers 00:36:13.676 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:36:13.676 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:13.676 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:13.676 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:13.676 00:36:13.676 real 0m0.091s 00:36:13.676 user 0m0.044s 00:36:13.676 sys 0m0.047s 00:36:13.676 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:13.676 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:13.676 ************************************ 00:36:13.676 END TEST nvmf_target_disconnect_tc1 00:36:13.676 ************************************ 00:36:13.676 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:36:13.676 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:13.676 20:37:25 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:13.676 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:13.676 ************************************ 00:36:13.676 START TEST nvmf_target_disconnect_tc2 00:36:13.676 ************************************ 00:36:13.676 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:36:13.676 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:36:13.676 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:13.676 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:13.676 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:13.676 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:13.676 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=404469 00:36:13.676 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:13.676 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 404469 00:36:13.676 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 404469 ']' 00:36:13.676 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:13.676 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:13.676 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:13.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:13.676 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:13.676 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:13.676 [2024-11-18 20:37:25.545541] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:36:13.676 [2024-11-18 20:37:25.545629] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:13.676 [2024-11-18 20:37:25.618778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:13.676 [2024-11-18 20:37:25.665470] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:13.676 [2024-11-18 20:37:25.665522] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:13.676 [2024-11-18 20:37:25.665545] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:13.676 [2024-11-18 20:37:25.665556] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:13.676 [2024-11-18 20:37:25.665565] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:13.676 [2024-11-18 20:37:25.667062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:36:13.676 [2024-11-18 20:37:25.667125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:36:13.676 [2024-11-18 20:37:25.667189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:36:13.676 [2024-11-18 20:37:25.667192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:13.936 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:13.936 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:36:13.936 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:13.936 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:13.936 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:13.936 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:13.936 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:13.936 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.936 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:13.936 Malloc0 00:36:13.936 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.936 20:37:25 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:13.936 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.936 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:13.936 [2024-11-18 20:37:25.843188] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:13.936 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.936 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:13.936 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.937 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:13.937 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.937 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:13.937 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.937 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:13.937 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.937 20:37:25 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:13.937 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.937 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:13.937 [2024-11-18 20:37:25.871489] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:13.937 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.937 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:13.937 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.937 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:13.937 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.937 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=404494 00:36:13.937 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:36:13.937 20:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:16.498 20:37:27 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 404469 00:36:16.498 20:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read 
completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 [2024-11-18 20:37:27.896133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 
00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 
00:36:16.498 [2024-11-18 20:37:27.896449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 
starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Write completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.498 Read completed with error (sct=0, sc=8) 00:36:16.498 starting I/O failed 00:36:16.499 [2024-11-18 20:37:27.896784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.499 Read completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Read completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Read completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Read completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Write completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Write completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Write completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Read completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Read completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Read completed with error (sct=0, 
sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Write completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Write completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Read completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Read completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Read completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Read completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Write completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Write completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Write completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Read completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Read completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Write completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Read completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Read completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Read completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Read completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Read completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Write completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Write completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Read completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Read completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 Write completed with error (sct=0, sc=8) 00:36:16.499 starting I/O failed 00:36:16.499 [2024-11-18 20:37:27.897128] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:16.499 [2024-11-18 20:37:27.897332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.499 [2024-11-18 20:37:27.897374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.499 qpair failed and we were unable to recover it. 00:36:16.499 [2024-11-18 20:37:27.897529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.499 [2024-11-18 20:37:27.897558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.499 qpair failed and we were unable to recover it. 00:36:16.499 [2024-11-18 20:37:27.897689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.499 [2024-11-18 20:37:27.897718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.499 qpair failed and we were unable to recover it. 00:36:16.499 [2024-11-18 20:37:27.897815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.499 [2024-11-18 20:37:27.897842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.499 qpair failed and we were unable to recover it. 00:36:16.499 [2024-11-18 20:37:27.897944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.499 [2024-11-18 20:37:27.897973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.499 qpair failed and we were unable to recover it. 
00:36:16.499 [2024-11-18 20:37:27.898091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.499 [2024-11-18 20:37:27.898118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.499 qpair failed and we were unable to recover it. 00:36:16.499 [2024-11-18 20:37:27.898221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.499 [2024-11-18 20:37:27.898264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.499 qpair failed and we were unable to recover it. 00:36:16.499 [2024-11-18 20:37:27.898351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.499 [2024-11-18 20:37:27.898379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.499 qpair failed and we were unable to recover it. 00:36:16.499 [2024-11-18 20:37:27.898473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.499 [2024-11-18 20:37:27.898500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.499 qpair failed and we were unable to recover it. 00:36:16.499 [2024-11-18 20:37:27.898610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.499 [2024-11-18 20:37:27.898650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.499 qpair failed and we were unable to recover it. 
00:36:16.499 [2024-11-18 20:37:27.898742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.499 [2024-11-18 20:37:27.898769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.499 qpair failed and we were unable to recover it. 00:36:16.499 [2024-11-18 20:37:27.898869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.499 [2024-11-18 20:37:27.898895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.499 qpair failed and we were unable to recover it. 00:36:16.499 [2024-11-18 20:37:27.899021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.499 [2024-11-18 20:37:27.899047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.499 qpair failed and we were unable to recover it. 00:36:16.499 [2024-11-18 20:37:27.899126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.499 [2024-11-18 20:37:27.899152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.499 qpair failed and we were unable to recover it. 00:36:16.499 [2024-11-18 20:37:27.899270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.499 [2024-11-18 20:37:27.899296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.499 qpair failed and we were unable to recover it. 
00:36:16.499 [2024-11-18 20:37:27.899389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.499 [2024-11-18 20:37:27.899420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.499 qpair failed and we were unable to recover it. 00:36:16.499 [2024-11-18 20:37:27.899509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.499 [2024-11-18 20:37:27.899534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.499 qpair failed and we were unable to recover it. 00:36:16.499 [2024-11-18 20:37:27.899623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.499 [2024-11-18 20:37:27.899658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.499 qpair failed and we were unable to recover it. 00:36:16.499 [2024-11-18 20:37:27.899745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.499 [2024-11-18 20:37:27.899770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.499 qpair failed and we were unable to recover it. 00:36:16.499 [2024-11-18 20:37:27.899917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.499 [2024-11-18 20:37:27.899950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.499 qpair failed and we were unable to recover it. 
00:36:16.499 [2024-11-18 20:37:27.900036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.499 [2024-11-18 20:37:27.900064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.499 qpair failed and we were unable to recover it. 00:36:16.499 [2024-11-18 20:37:27.900142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.499 [2024-11-18 20:37:27.900169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.499 qpair failed and we were unable to recover it. 00:36:16.499 [2024-11-18 20:37:27.900283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.499 [2024-11-18 20:37:27.900309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.499 qpair failed and we were unable to recover it. 00:36:16.499 [2024-11-18 20:37:27.900386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.499 [2024-11-18 20:37:27.900413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.499 qpair failed and we were unable to recover it. 00:36:16.499 [2024-11-18 20:37:27.900536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.499 [2024-11-18 20:37:27.900564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.499 qpair failed and we were unable to recover it. 
00:36:16.499 [2024-11-18 20:37:27.900659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.499 [2024-11-18 20:37:27.900688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.499 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.900775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.900803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.900889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.900916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.901032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.901059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.901153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.901180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 
00:36:16.500 [2024-11-18 20:37:27.901321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.901348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.901489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.901516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.901650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.901699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.901799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.901827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.901912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.901941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 
00:36:16.500 [2024-11-18 20:37:27.902052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.902078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.902183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.902209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.902294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.902320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.902434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.902460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.902548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.902578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 
00:36:16.500 [2024-11-18 20:37:27.902678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.902707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.902797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.902824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.902902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.902939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.903035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.903061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.903207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.903236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 
00:36:16.500 [2024-11-18 20:37:27.903332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.903358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.903512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.903539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.903662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.903690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.903773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.903800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.903912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.903946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 
00:36:16.500 [2024-11-18 20:37:27.904061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.904089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.904202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.904229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.904318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.904345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.904455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.904483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.904604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.904648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 
00:36:16.500 [2024-11-18 20:37:27.904749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.904782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.904897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.904936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.905049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.905076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.905164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.905192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.905282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.905309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 
00:36:16.500 [2024-11-18 20:37:27.905467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.905507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.905609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.905663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.905768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.500 [2024-11-18 20:37:27.905796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.500 qpair failed and we were unable to recover it. 00:36:16.500 [2024-11-18 20:37:27.905908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.501 [2024-11-18 20:37:27.905941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.501 qpair failed and we were unable to recover it. 00:36:16.501 [2024-11-18 20:37:27.906022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.501 [2024-11-18 20:37:27.906049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.501 qpair failed and we were unable to recover it. 
00:36:16.501 [2024-11-18 20:37:27.906174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.501 [2024-11-18 20:37:27.906200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.501 qpair failed and we were unable to recover it. 00:36:16.501 [2024-11-18 20:37:27.906340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.501 [2024-11-18 20:37:27.906366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.501 qpair failed and we were unable to recover it. 00:36:16.501 [2024-11-18 20:37:27.906485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.501 [2024-11-18 20:37:27.906526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.501 qpair failed and we were unable to recover it. 00:36:16.501 [2024-11-18 20:37:27.906658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.501 [2024-11-18 20:37:27.906687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.501 qpair failed and we were unable to recover it. 00:36:16.501 [2024-11-18 20:37:27.906789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.501 [2024-11-18 20:37:27.906818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.501 qpair failed and we were unable to recover it. 
00:36:16.501 [2024-11-18 20:37:27.906932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.501 [2024-11-18 20:37:27.906959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.501 qpair failed and we were unable to recover it. 00:36:16.501 [2024-11-18 20:37:27.907038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.501 [2024-11-18 20:37:27.907065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.501 qpair failed and we were unable to recover it. 00:36:16.501 [2024-11-18 20:37:27.907148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.501 [2024-11-18 20:37:27.907176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.501 qpair failed and we were unable to recover it. 00:36:16.501 [2024-11-18 20:37:27.907294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.501 [2024-11-18 20:37:27.907321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.501 qpair failed and we were unable to recover it. 00:36:16.501 [2024-11-18 20:37:27.907473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.501 [2024-11-18 20:37:27.907513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.501 qpair failed and we were unable to recover it. 
00:36:16.501 [2024-11-18 20:37:27.907627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.501 [2024-11-18 20:37:27.907660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.501 qpair failed and we were unable to recover it.
00:36:16.501 [2024-11-18 20:37:27.907747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.501 [2024-11-18 20:37:27.907775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.501 qpair failed and we were unable to recover it.
00:36:16.501 [2024-11-18 20:37:27.907851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.501 [2024-11-18 20:37:27.907877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.501 qpair failed and we were unable to recover it.
00:36:16.501 [2024-11-18 20:37:27.907985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.501 [2024-11-18 20:37:27.908011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.501 qpair failed and we were unable to recover it.
00:36:16.501 [2024-11-18 20:37:27.908126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.501 [2024-11-18 20:37:27.908153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.501 qpair failed and we were unable to recover it.
00:36:16.501 [2024-11-18 20:37:27.908291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.501 [2024-11-18 20:37:27.908317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.501 qpair failed and we were unable to recover it.
00:36:16.501 [2024-11-18 20:37:27.908482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.501 [2024-11-18 20:37:27.908522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.501 qpair failed and we were unable to recover it.
00:36:16.501 [2024-11-18 20:37:27.908651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.501 [2024-11-18 20:37:27.908684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.501 qpair failed and we were unable to recover it.
00:36:16.501 [2024-11-18 20:37:27.908797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.501 [2024-11-18 20:37:27.908825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.501 qpair failed and we were unable to recover it.
00:36:16.501 [2024-11-18 20:37:27.908912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.501 [2024-11-18 20:37:27.908939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.501 qpair failed and we were unable to recover it.
00:36:16.501 [2024-11-18 20:37:27.909049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.501 [2024-11-18 20:37:27.909076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.501 qpair failed and we were unable to recover it.
00:36:16.501 [2024-11-18 20:37:27.909215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.501 [2024-11-18 20:37:27.909242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.501 qpair failed and we were unable to recover it.
00:36:16.501 [2024-11-18 20:37:27.909350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.501 [2024-11-18 20:37:27.909377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.501 qpair failed and we were unable to recover it.
00:36:16.501 [2024-11-18 20:37:27.909512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.501 [2024-11-18 20:37:27.909539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.501 qpair failed and we were unable to recover it.
00:36:16.501 [2024-11-18 20:37:27.909685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.501 [2024-11-18 20:37:27.909713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.501 qpair failed and we were unable to recover it.
00:36:16.501 [2024-11-18 20:37:27.909802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.501 [2024-11-18 20:37:27.909829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.501 qpair failed and we were unable to recover it.
00:36:16.501 [2024-11-18 20:37:27.909922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.501 [2024-11-18 20:37:27.909949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.501 qpair failed and we were unable to recover it.
00:36:16.501 [2024-11-18 20:37:27.910058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.501 [2024-11-18 20:37:27.910085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.501 qpair failed and we were unable to recover it.
00:36:16.501 [2024-11-18 20:37:27.910168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.501 [2024-11-18 20:37:27.910195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.501 qpair failed and we were unable to recover it.
00:36:16.501 [2024-11-18 20:37:27.910277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.501 [2024-11-18 20:37:27.910304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.501 qpair failed and we were unable to recover it.
00:36:16.501 [2024-11-18 20:37:27.910421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.501 [2024-11-18 20:37:27.910448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.501 qpair failed and we were unable to recover it.
00:36:16.501 [2024-11-18 20:37:27.910579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.501 [2024-11-18 20:37:27.910619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.501 qpair failed and we were unable to recover it.
00:36:16.501 [2024-11-18 20:37:27.910731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.501 [2024-11-18 20:37:27.910762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.501 qpair failed and we were unable to recover it.
00:36:16.501 [2024-11-18 20:37:27.910851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.501 [2024-11-18 20:37:27.910877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.501 qpair failed and we were unable to recover it.
00:36:16.501 [2024-11-18 20:37:27.910988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.501 [2024-11-18 20:37:27.911015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.501 qpair failed and we were unable to recover it.
00:36:16.501 [2024-11-18 20:37:27.911093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.501 [2024-11-18 20:37:27.911119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.501 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.911263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.911292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.911404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.911432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.911520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.911547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.911660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.911687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.911831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.911858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.911949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.911976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.912093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.912120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.912227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.912254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.912395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.912422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.912539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.912567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.912708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.912749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.912860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.912899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.912990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.913018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.913104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.913131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.913245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.913271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.913434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.913479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.913576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.913604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.913722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.913750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.913833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.913859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.913938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.913964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.914050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.914076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.914167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.914201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.914296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.914323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.914406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.914434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.914578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.914605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.914729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.914756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.914873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.914900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.915025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.915051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.915191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.915218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.915333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.915360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.915444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.502 [2024-11-18 20:37:27.915471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.502 qpair failed and we were unable to recover it.
00:36:16.502 [2024-11-18 20:37:27.915576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.915603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.915719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.915746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.915856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.915883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.916025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.916051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.916167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.916194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.916336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.916363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.916443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.916471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.916592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.916619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.916717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.916745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.916859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.916887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.917011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.917037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.917128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.917154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.917244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.917271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.917361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.917387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.917491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.917517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.917645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.917673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.917765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.917791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.917923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.917962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.918108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.918136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.918250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.918277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.918363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.918390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.918502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.918529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.918654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.918695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.918843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.918871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.918962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.918991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.919077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.919105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.919215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.919242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.919382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.919409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.919527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.919555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.919663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.919690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.919799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.919826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.919968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.919995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.920113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.920139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.920224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.920251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.920368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.920395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.920527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.920567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.920663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.920691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.503 [2024-11-18 20:37:27.920838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.503 [2024-11-18 20:37:27.920865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.503 qpair failed and we were unable to recover it.
00:36:16.504 [2024-11-18 20:37:27.920983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.921009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.921130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.921157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.921301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.921327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.921443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.921471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.921583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.921610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 
00:36:16.504 [2024-11-18 20:37:27.921707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.921737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.921857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.921884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.922023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.922051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.922132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.922158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.922306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.922333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 
00:36:16.504 [2024-11-18 20:37:27.922423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.922450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.922534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.922560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.922669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.922696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.922807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.922833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.922978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.923004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 
00:36:16.504 [2024-11-18 20:37:27.923211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.923237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.923359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.923385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.923505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.923531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.923614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.923653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.923765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.923801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 
00:36:16.504 [2024-11-18 20:37:27.923877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.923904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.924018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.924044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.924125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.924152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.924294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.924321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.924403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.924429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 
00:36:16.504 [2024-11-18 20:37:27.924543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.924569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.924657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.924698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.924832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.924871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.925015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.925055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.925150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.925178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 
00:36:16.504 [2024-11-18 20:37:27.925292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.925319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.925410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.925437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.925547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.925574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.925716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.925757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.925871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.925900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 
00:36:16.504 [2024-11-18 20:37:27.926019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.926046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.926188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.926214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.926336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.504 [2024-11-18 20:37:27.926363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.504 qpair failed and we were unable to recover it. 00:36:16.504 [2024-11-18 20:37:27.926483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.926511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 00:36:16.505 [2024-11-18 20:37:27.926607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.926634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 
00:36:16.505 [2024-11-18 20:37:27.926733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.926760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 00:36:16.505 [2024-11-18 20:37:27.926863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.926889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 00:36:16.505 [2024-11-18 20:37:27.927003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.927029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 00:36:16.505 [2024-11-18 20:37:27.927134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.927160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 00:36:16.505 [2024-11-18 20:37:27.927268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.927294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 
00:36:16.505 [2024-11-18 20:37:27.927388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.927428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 00:36:16.505 [2024-11-18 20:37:27.927586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.927619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 00:36:16.505 [2024-11-18 20:37:27.927717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.927746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 00:36:16.505 [2024-11-18 20:37:27.927862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.927889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 00:36:16.505 [2024-11-18 20:37:27.928006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.928034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 
00:36:16.505 [2024-11-18 20:37:27.928121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.928148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 00:36:16.505 [2024-11-18 20:37:27.928323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.928376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 00:36:16.505 [2024-11-18 20:37:27.928516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.928542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 00:36:16.505 [2024-11-18 20:37:27.928658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.928687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 00:36:16.505 [2024-11-18 20:37:27.928802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.928829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 
00:36:16.505 [2024-11-18 20:37:27.928974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.929014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 00:36:16.505 [2024-11-18 20:37:27.929102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.929131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 00:36:16.505 [2024-11-18 20:37:27.929283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.929336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 00:36:16.505 [2024-11-18 20:37:27.929423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.929449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 00:36:16.505 [2024-11-18 20:37:27.929590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.929617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 
00:36:16.505 [2024-11-18 20:37:27.929746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.929775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 00:36:16.505 [2024-11-18 20:37:27.929889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.929916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 00:36:16.505 [2024-11-18 20:37:27.930005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.930032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 00:36:16.505 [2024-11-18 20:37:27.930114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.930141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 00:36:16.505 [2024-11-18 20:37:27.930228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.930256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 
00:36:16.505 [2024-11-18 20:37:27.930342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.930369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 00:36:16.505 [2024-11-18 20:37:27.930479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.930507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 00:36:16.505 [2024-11-18 20:37:27.930625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.930664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 00:36:16.505 [2024-11-18 20:37:27.930782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.930808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 00:36:16.505 [2024-11-18 20:37:27.930924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.930950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 
00:36:16.505 [2024-11-18 20:37:27.931085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.931111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 00:36:16.505 [2024-11-18 20:37:27.931198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.931225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 00:36:16.505 [2024-11-18 20:37:27.931331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.931357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 00:36:16.505 [2024-11-18 20:37:27.931473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.931502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 00:36:16.505 [2024-11-18 20:37:27.931584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.931612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.505 qpair failed and we were unable to recover it. 
00:36:16.505 [2024-11-18 20:37:27.931729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.505 [2024-11-18 20:37:27.931759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.931896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.931923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.932041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.932068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.932217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.932244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.932352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.932379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 
00:36:16.506 [2024-11-18 20:37:27.932498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.932525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.932608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.932643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.932756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.932784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.932882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.932922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.933042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.933070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 
00:36:16.506 [2024-11-18 20:37:27.933172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.933199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.933339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.933371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.933486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.933513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.933624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.933658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.933778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.933805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 
00:36:16.506 [2024-11-18 20:37:27.933887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.933913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.933993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.934019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.934108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.934136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.934260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.934288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.934377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.934405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 
00:36:16.506 [2024-11-18 20:37:27.934521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.934548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.934690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.934730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.934859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.934899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.935016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.935044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.935127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.935154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 
00:36:16.506 [2024-11-18 20:37:27.935270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.935297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.935403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.935429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.935515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.935543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.935662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.935702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.935851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.935879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 
00:36:16.506 [2024-11-18 20:37:27.935975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.936002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.936116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.936143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.936233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.936259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.936376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.936405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.936519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.936548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 
00:36:16.506 [2024-11-18 20:37:27.936630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.936669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.936754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.936781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.936871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.506 [2024-11-18 20:37:27.936898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.506 qpair failed and we were unable to recover it. 00:36:16.506 [2024-11-18 20:37:27.937004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.937037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 00:36:16.507 [2024-11-18 20:37:27.937127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.937155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 
00:36:16.507 [2024-11-18 20:37:27.937263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.937290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 00:36:16.507 [2024-11-18 20:37:27.937383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.937410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 00:36:16.507 [2024-11-18 20:37:27.937497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.937524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 00:36:16.507 [2024-11-18 20:37:27.937646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.937674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 00:36:16.507 [2024-11-18 20:37:27.937770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.937797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 
00:36:16.507 [2024-11-18 20:37:27.937878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.937905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 00:36:16.507 [2024-11-18 20:37:27.938023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.938051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 00:36:16.507 [2024-11-18 20:37:27.938138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.938166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 00:36:16.507 [2024-11-18 20:37:27.938290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.938319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 00:36:16.507 [2024-11-18 20:37:27.938446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.938486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 
00:36:16.507 [2024-11-18 20:37:27.938615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.938667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 00:36:16.507 [2024-11-18 20:37:27.938793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.938821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 00:36:16.507 [2024-11-18 20:37:27.938970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.938997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 00:36:16.507 [2024-11-18 20:37:27.939115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.939143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 00:36:16.507 [2024-11-18 20:37:27.939256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.939282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 
00:36:16.507 [2024-11-18 20:37:27.939395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.939423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 00:36:16.507 [2024-11-18 20:37:27.939553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.939593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 00:36:16.507 [2024-11-18 20:37:27.939725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.939754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 00:36:16.507 [2024-11-18 20:37:27.939868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.939895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 00:36:16.507 [2024-11-18 20:37:27.939976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.940002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 
00:36:16.507 [2024-11-18 20:37:27.940118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.940145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 00:36:16.507 [2024-11-18 20:37:27.940285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.940337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 00:36:16.507 [2024-11-18 20:37:27.940418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.940445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 00:36:16.507 [2024-11-18 20:37:27.940546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.940576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 00:36:16.507 [2024-11-18 20:37:27.940730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.940758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 
00:36:16.507 [2024-11-18 20:37:27.940858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.940885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 00:36:16.507 [2024-11-18 20:37:27.940967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.940994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 00:36:16.507 [2024-11-18 20:37:27.941077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.941103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 00:36:16.507 [2024-11-18 20:37:27.941217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.941243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 00:36:16.507 [2024-11-18 20:37:27.941365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.941406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 
00:36:16.507 [2024-11-18 20:37:27.941496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.507 [2024-11-18 20:37:27.941537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.507 qpair failed and we were unable to recover it. 00:36:16.507 [2024-11-18 20:37:27.941681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.941722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 00:36:16.508 [2024-11-18 20:37:27.941848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.941876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 00:36:16.508 [2024-11-18 20:37:27.941971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.941998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 00:36:16.508 [2024-11-18 20:37:27.942123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.942150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 
00:36:16.508 [2024-11-18 20:37:27.942229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.942256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 00:36:16.508 [2024-11-18 20:37:27.942375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.942402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 00:36:16.508 [2024-11-18 20:37:27.942498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.942539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 00:36:16.508 [2024-11-18 20:37:27.942662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.942696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 00:36:16.508 [2024-11-18 20:37:27.942868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.942915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 
00:36:16.508 [2024-11-18 20:37:27.943013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.943043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 00:36:16.508 [2024-11-18 20:37:27.943189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.943217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 00:36:16.508 [2024-11-18 20:37:27.943333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.943360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 00:36:16.508 [2024-11-18 20:37:27.943448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.943476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 00:36:16.508 [2024-11-18 20:37:27.943628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.943664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 
00:36:16.508 [2024-11-18 20:37:27.943785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.943812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 00:36:16.508 [2024-11-18 20:37:27.943921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.943948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 00:36:16.508 [2024-11-18 20:37:27.944081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.944135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 00:36:16.508 [2024-11-18 20:37:27.944267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.944315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 00:36:16.508 [2024-11-18 20:37:27.944455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.944504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 
00:36:16.508 [2024-11-18 20:37:27.944623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.944657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 00:36:16.508 [2024-11-18 20:37:27.944736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.944763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 00:36:16.508 [2024-11-18 20:37:27.944853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.944880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 00:36:16.508 [2024-11-18 20:37:27.944952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.944979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 00:36:16.508 [2024-11-18 20:37:27.945058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.945084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 
00:36:16.508 [2024-11-18 20:37:27.945185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.945211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 00:36:16.508 [2024-11-18 20:37:27.945351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.945377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 00:36:16.508 [2024-11-18 20:37:27.945464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.945490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 00:36:16.508 [2024-11-18 20:37:27.945561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.945587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 00:36:16.508 [2024-11-18 20:37:27.945686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.945727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 
00:36:16.508 [2024-11-18 20:37:27.945832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.945871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 00:36:16.508 [2024-11-18 20:37:27.945953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.945981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 00:36:16.508 [2024-11-18 20:37:27.946067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.946094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 00:36:16.508 [2024-11-18 20:37:27.946188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.946214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 00:36:16.508 [2024-11-18 20:37:27.946288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.508 [2024-11-18 20:37:27.946315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.508 qpair failed and we were unable to recover it. 
00:36:16.508 [2024-11-18 20:37:27.946406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.508 [2024-11-18 20:37:27.946437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.508 qpair failed and we were unable to recover it.
00:36:16.508 [2024-11-18 20:37:27.946516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.508 [2024-11-18 20:37:27.946543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.508 qpair failed and we were unable to recover it.
00:36:16.508 [2024-11-18 20:37:27.946657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.508 [2024-11-18 20:37:27.946684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.508 qpair failed and we were unable to recover it.
00:36:16.508 [2024-11-18 20:37:27.946797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.508 [2024-11-18 20:37:27.946824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.508 qpair failed and we were unable to recover it.
00:36:16.508 [2024-11-18 20:37:27.946918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.946944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.947060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.947087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.947170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.947196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.947288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.947314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.947424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.947450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.947561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.947588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.947673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.947700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.947811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.947837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.947913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.947939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.948041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.948067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.948155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.948181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.948275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.948305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.948429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.948470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.948620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.948655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.948741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.948768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.948914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.948950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.949039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.949066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.949155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.949183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.949275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.949303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.949417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.949444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.949556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.949583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.949726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.949753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.949849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.949876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.949968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.950010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.950121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.950151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.950246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.950271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.950375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.950402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.950487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.950515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.950622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.950668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.950806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.950846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.950990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.951017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.951103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.951130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.951244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.951271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.951411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.951437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.951550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.951576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.951728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.951769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.951884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.951913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.509 qpair failed and we were unable to recover it.
00:36:16.509 [2024-11-18 20:37:27.952014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-11-18 20:37:27.952042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.952185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.952212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.952330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.952362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.952519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.952548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.952667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.952695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.952779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.952806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.952890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.952918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.953038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.953066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.953236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.953264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.953406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.953435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.953560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.953589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.953698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.953727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.953818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.953846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.953998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.954025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.954222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.954249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.954340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.954367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.954454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.954482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.954602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.954645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.954729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.954756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.954837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.954864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.954962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.954989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.955093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.955131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.955221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.955248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.955335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.955363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.955516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.955542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.955665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.955692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.955778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.955810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.955922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.955953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.956067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.956094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.956213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.956239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.956363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.956395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.956489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.956518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.956651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.956680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.956818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.956846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.956969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.957006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.957103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.957130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.957272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.957300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.510 [2024-11-18 20:37:27.957427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-11-18 20:37:27.957466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.510 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.957563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.957591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.957703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.957731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.957856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.957883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.957989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.958016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.958158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.958245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.958356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.958383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.958508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.958534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.958654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.958680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.958768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.958796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.958917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.958964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.959113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.959141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.959258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.959286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.959427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.959455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.959581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.959630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.959754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.959785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.959901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.959929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.960114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.960151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.960242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.960268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.960386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.960412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.960530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.960557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.960652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.960679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.960766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.960793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.960869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.960895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.961015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.961042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.961147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.961173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.961265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.961292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.961405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.961431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.961571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.961598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.961753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.961785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.961875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.961901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.962025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.962051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.962157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.962183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.962277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.962303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.962376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.962403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.962517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.962544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.962665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.511 [2024-11-18 20:37:27.962692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.511 qpair failed and we were unable to recover it.
00:36:16.511 [2024-11-18 20:37:27.962770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.511 [2024-11-18 20:37:27.962796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.511 qpair failed and we were unable to recover it. 00:36:16.511 [2024-11-18 20:37:27.962905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.511 [2024-11-18 20:37:27.962931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.511 qpair failed and we were unable to recover it. 00:36:16.512 [2024-11-18 20:37:27.963018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.963044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 00:36:16.512 [2024-11-18 20:37:27.963182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.963208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 00:36:16.512 [2024-11-18 20:37:27.963325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.963351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 
00:36:16.512 [2024-11-18 20:37:27.963456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.963496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 00:36:16.512 [2024-11-18 20:37:27.963679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.963720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 00:36:16.512 [2024-11-18 20:37:27.963834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.963874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 00:36:16.512 [2024-11-18 20:37:27.964007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.964036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 00:36:16.512 [2024-11-18 20:37:27.964145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.964173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 
00:36:16.512 [2024-11-18 20:37:27.964282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.964316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 00:36:16.512 [2024-11-18 20:37:27.964478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.964506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 00:36:16.512 [2024-11-18 20:37:27.964618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.964660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 00:36:16.512 [2024-11-18 20:37:27.964776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.964803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 00:36:16.512 [2024-11-18 20:37:27.964893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.964920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 
00:36:16.512 [2024-11-18 20:37:27.965009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.965037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 00:36:16.512 [2024-11-18 20:37:27.965129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.965156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 00:36:16.512 [2024-11-18 20:37:27.965294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.965321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 00:36:16.512 [2024-11-18 20:37:27.965449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.965490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 00:36:16.512 [2024-11-18 20:37:27.965616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.965666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 
00:36:16.512 [2024-11-18 20:37:27.965781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.965808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 00:36:16.512 [2024-11-18 20:37:27.965904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.965942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 00:36:16.512 [2024-11-18 20:37:27.966056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.966082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 00:36:16.512 [2024-11-18 20:37:27.966204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.966257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 00:36:16.512 [2024-11-18 20:37:27.966378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.966405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 
00:36:16.512 [2024-11-18 20:37:27.966548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.966574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 00:36:16.512 [2024-11-18 20:37:27.966673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.966700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 00:36:16.512 [2024-11-18 20:37:27.966786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.966813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 00:36:16.512 [2024-11-18 20:37:27.966925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.966957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 00:36:16.512 [2024-11-18 20:37:27.967037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.967064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 
00:36:16.512 [2024-11-18 20:37:27.967172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.967198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 00:36:16.512 [2024-11-18 20:37:27.967275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.967302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 00:36:16.512 [2024-11-18 20:37:27.967458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.967499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 00:36:16.512 [2024-11-18 20:37:27.967626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.512 [2024-11-18 20:37:27.967663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.512 qpair failed and we were unable to recover it. 00:36:16.512 [2024-11-18 20:37:27.967745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.967772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 
00:36:16.513 [2024-11-18 20:37:27.967857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.967884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 00:36:16.513 [2024-11-18 20:37:27.968003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.968030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 00:36:16.513 [2024-11-18 20:37:27.968107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.968134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 00:36:16.513 [2024-11-18 20:37:27.968219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.968246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 00:36:16.513 [2024-11-18 20:37:27.968362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.968388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 
00:36:16.513 [2024-11-18 20:37:27.968524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.968563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 00:36:16.513 [2024-11-18 20:37:27.968665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.968694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 00:36:16.513 [2024-11-18 20:37:27.968770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.968796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 00:36:16.513 [2024-11-18 20:37:27.968872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.968899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 00:36:16.513 [2024-11-18 20:37:27.969008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.969035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 
00:36:16.513 [2024-11-18 20:37:27.969127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.969154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 00:36:16.513 [2024-11-18 20:37:27.969238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.969266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 00:36:16.513 [2024-11-18 20:37:27.969347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.969374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 00:36:16.513 [2024-11-18 20:37:27.969458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.969484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 00:36:16.513 [2024-11-18 20:37:27.969588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.969614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 
00:36:16.513 [2024-11-18 20:37:27.969703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.969735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 00:36:16.513 [2024-11-18 20:37:27.969879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.969906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 00:36:16.513 [2024-11-18 20:37:27.970019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.970047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 00:36:16.513 [2024-11-18 20:37:27.970144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.970192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 00:36:16.513 [2024-11-18 20:37:27.970277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.970303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 
00:36:16.513 [2024-11-18 20:37:27.970413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.970439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 00:36:16.513 [2024-11-18 20:37:27.970549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.970576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 00:36:16.513 [2024-11-18 20:37:27.970690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.970717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 00:36:16.513 [2024-11-18 20:37:27.970800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.970827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 00:36:16.513 [2024-11-18 20:37:27.970903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.970941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 
00:36:16.513 [2024-11-18 20:37:27.971058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.971085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 00:36:16.513 [2024-11-18 20:37:27.971172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.971198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 00:36:16.513 [2024-11-18 20:37:27.971309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.971338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 00:36:16.513 [2024-11-18 20:37:27.971438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.971478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 00:36:16.513 [2024-11-18 20:37:27.971597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.971642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 
00:36:16.513 [2024-11-18 20:37:27.971762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.971791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 00:36:16.513 [2024-11-18 20:37:27.971877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.971905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 00:36:16.513 [2024-11-18 20:37:27.972034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.972063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 00:36:16.513 [2024-11-18 20:37:27.972180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.972208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 00:36:16.513 [2024-11-18 20:37:27.972317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.972345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 
00:36:16.513 [2024-11-18 20:37:27.972460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.513 [2024-11-18 20:37:27.972487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.513 qpair failed and we were unable to recover it. 00:36:16.513 [2024-11-18 20:37:27.972599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.514 [2024-11-18 20:37:27.972648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.514 qpair failed and we were unable to recover it. 00:36:16.514 [2024-11-18 20:37:27.972761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.514 [2024-11-18 20:37:27.972788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.514 qpair failed and we were unable to recover it. 00:36:16.514 [2024-11-18 20:37:27.972915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.514 [2024-11-18 20:37:27.972951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.514 qpair failed and we were unable to recover it. 00:36:16.514 [2024-11-18 20:37:27.973070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.514 [2024-11-18 20:37:27.973097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.514 qpair failed and we were unable to recover it. 
00:36:16.514 [2024-11-18 20:37:27.973216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.514 [2024-11-18 20:37:27.973244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.514 qpair failed and we were unable to recover it. 
00:36:16.514 [... identical connect() failures (errno = 111, connection refused) against addr=10.0.0.2, port=4420 repeat through 20:37:27.990278 for tqpairs 0x1671b40, 0x7fe694000b90, 0x7fe698000b90, and 0x7fe6a0000b90; each ends with "qpair failed and we were unable to recover it." ...] 
00:36:16.517 [2024-11-18 20:37:27.990352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.517 [2024-11-18 20:37:27.990379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.517 qpair failed and we were unable to recover it. 00:36:16.517 [2024-11-18 20:37:27.990519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.517 [2024-11-18 20:37:27.990546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.517 qpair failed and we were unable to recover it. 00:36:16.517 [2024-11-18 20:37:27.990667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.517 [2024-11-18 20:37:27.990696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.517 qpair failed and we were unable to recover it. 00:36:16.517 [2024-11-18 20:37:27.990798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.517 [2024-11-18 20:37:27.990826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.517 qpair failed and we were unable to recover it. 00:36:16.517 [2024-11-18 20:37:27.990967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.517 [2024-11-18 20:37:27.991015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.517 qpair failed and we were unable to recover it. 
00:36:16.517 [2024-11-18 20:37:27.991093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.517 [2024-11-18 20:37:27.991119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.517 qpair failed and we were unable to recover it. 00:36:16.517 [2024-11-18 20:37:27.991199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.517 [2024-11-18 20:37:27.991225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.517 qpair failed and we were unable to recover it. 00:36:16.517 [2024-11-18 20:37:27.991399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.517 [2024-11-18 20:37:27.991433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.517 qpair failed and we were unable to recover it. 00:36:16.517 [2024-11-18 20:37:27.991559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.517 [2024-11-18 20:37:27.991585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.517 qpair failed and we were unable to recover it. 00:36:16.517 [2024-11-18 20:37:27.991713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.517 [2024-11-18 20:37:27.991741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.517 qpair failed and we were unable to recover it. 
00:36:16.517 [2024-11-18 20:37:27.991835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.517 [2024-11-18 20:37:27.991864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.517 qpair failed and we were unable to recover it. 00:36:16.517 [2024-11-18 20:37:27.991991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.517 [2024-11-18 20:37:27.992020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.517 qpair failed and we were unable to recover it. 00:36:16.517 [2024-11-18 20:37:27.992205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.517 [2024-11-18 20:37:27.992259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.517 qpair failed and we were unable to recover it. 00:36:16.517 [2024-11-18 20:37:27.992369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.517 [2024-11-18 20:37:27.992396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.517 qpair failed and we were unable to recover it. 00:36:16.517 [2024-11-18 20:37:27.992510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.517 [2024-11-18 20:37:27.992537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.517 qpair failed and we were unable to recover it. 
00:36:16.517 [2024-11-18 20:37:27.992656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.517 [2024-11-18 20:37:27.992700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.517 qpair failed and we were unable to recover it. 00:36:16.517 [2024-11-18 20:37:27.992795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.517 [2024-11-18 20:37:27.992823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.517 qpair failed and we were unable to recover it. 00:36:16.517 [2024-11-18 20:37:27.992945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.517 [2024-11-18 20:37:27.992974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.517 qpair failed and we were unable to recover it. 00:36:16.517 [2024-11-18 20:37:27.993117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.517 [2024-11-18 20:37:27.993178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.517 qpair failed and we were unable to recover it. 00:36:16.517 [2024-11-18 20:37:27.993399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.517 [2024-11-18 20:37:27.993454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.517 qpair failed and we were unable to recover it. 
00:36:16.517 [2024-11-18 20:37:27.993571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.517 [2024-11-18 20:37:27.993598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.517 qpair failed and we were unable to recover it. 00:36:16.517 [2024-11-18 20:37:27.993687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.517 [2024-11-18 20:37:27.993715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.517 qpair failed and we were unable to recover it. 00:36:16.517 [2024-11-18 20:37:27.993830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.517 [2024-11-18 20:37:27.993857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.517 qpair failed and we were unable to recover it. 00:36:16.517 [2024-11-18 20:37:27.993973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.517 [2024-11-18 20:37:27.994001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.517 qpair failed and we were unable to recover it. 00:36:16.517 [2024-11-18 20:37:27.994080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.517 [2024-11-18 20:37:27.994108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.517 qpair failed and we were unable to recover it. 
00:36:16.517 [2024-11-18 20:37:27.994227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.517 [2024-11-18 20:37:27.994255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.517 qpair failed and we were unable to recover it. 00:36:16.517 [2024-11-18 20:37:27.994385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.994413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:27.994500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.994526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:27.994649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.994677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:27.994814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.994840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 
00:36:16.518 [2024-11-18 20:37:27.995023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.995077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:27.995301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.995356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:27.995505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.995531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:27.995692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.995721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:27.995859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.995886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 
00:36:16.518 [2024-11-18 20:37:27.996002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.996029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:27.996213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.996270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:27.996461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.996519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:27.996662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.996689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:27.996778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.996805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 
00:36:16.518 [2024-11-18 20:37:27.996887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.996915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:27.997035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.997062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:27.997230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.997287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:27.997376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.997403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:27.997492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.997520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 
00:36:16.518 [2024-11-18 20:37:27.997625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.997661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:27.997741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.997768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:27.997907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.997945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:27.998051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.998078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:27.998160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.998187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 
00:36:16.518 [2024-11-18 20:37:27.998327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.998354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:27.998466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.998492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:27.998610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.998650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:27.998769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.998796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:27.998934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.998961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 
00:36:16.518 [2024-11-18 20:37:27.999084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.999111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:27.999188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.999215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:27.999332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.999361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:27.999451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.999478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:27.999586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.999613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 
00:36:16.518 [2024-11-18 20:37:27.999765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.999792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:27.999871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:27.999897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:28.000052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:28.000079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:28.000161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.518 [2024-11-18 20:37:28.000188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.518 qpair failed and we were unable to recover it. 00:36:16.518 [2024-11-18 20:37:28.000293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.519 [2024-11-18 20:37:28.000320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.519 qpair failed and we were unable to recover it. 
00:36:16.519 [2024-11-18 20:37:28.000399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.519 [2024-11-18 20:37:28.000426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.519 qpair failed and we were unable to recover it. 00:36:16.519 [2024-11-18 20:37:28.000501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.519 [2024-11-18 20:37:28.000527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.519 qpair failed and we were unable to recover it. 00:36:16.519 [2024-11-18 20:37:28.000622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.519 [2024-11-18 20:37:28.000662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.519 qpair failed and we were unable to recover it. 00:36:16.519 [2024-11-18 20:37:28.000752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.519 [2024-11-18 20:37:28.000779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.519 qpair failed and we were unable to recover it. 00:36:16.519 [2024-11-18 20:37:28.000923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.519 [2024-11-18 20:37:28.000950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.519 qpair failed and we were unable to recover it. 
00:36:16.519 [2024-11-18 20:37:28.001065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.519 [2024-11-18 20:37:28.001096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.519 qpair failed and we were unable to recover it. 00:36:16.519 [2024-11-18 20:37:28.001185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.519 [2024-11-18 20:37:28.001212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.519 qpair failed and we were unable to recover it. 00:36:16.519 [2024-11-18 20:37:28.001309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.519 [2024-11-18 20:37:28.001336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.519 qpair failed and we were unable to recover it. 00:36:16.519 [2024-11-18 20:37:28.001418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.519 [2024-11-18 20:37:28.001448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.519 qpair failed and we were unable to recover it. 00:36:16.519 [2024-11-18 20:37:28.001591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.519 [2024-11-18 20:37:28.001619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.519 qpair failed and we were unable to recover it. 
00:36:16.519 [2024-11-18 20:37:28.001741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.519 [2024-11-18 20:37:28.001769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.519 qpair failed and we were unable to recover it. 00:36:16.519 [2024-11-18 20:37:28.001890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.519 [2024-11-18 20:37:28.001918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.519 qpair failed and we were unable to recover it. 00:36:16.519 [2024-11-18 20:37:28.002061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.519 [2024-11-18 20:37:28.002088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.519 qpair failed and we were unable to recover it. 00:36:16.519 [2024-11-18 20:37:28.002212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.519 [2024-11-18 20:37:28.002264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.519 qpair failed and we were unable to recover it. 00:36:16.519 [2024-11-18 20:37:28.002382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.519 [2024-11-18 20:37:28.002409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.519 qpair failed and we were unable to recover it. 
00:36:16.519 [2024-11-18 20:37:28.004282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.519 [2024-11-18 20:37:28.004311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.519 qpair failed and we were unable to recover it.
00:36:16.522 [2024-11-18 20:37:28.018263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.522 [2024-11-18 20:37:28.018290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.522 qpair failed and we were unable to recover it. 00:36:16.522 [2024-11-18 20:37:28.018407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.522 [2024-11-18 20:37:28.018434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.522 qpair failed and we were unable to recover it. 00:36:16.522 [2024-11-18 20:37:28.018549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.522 [2024-11-18 20:37:28.018576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.522 qpair failed and we were unable to recover it. 00:36:16.522 [2024-11-18 20:37:28.018689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.522 [2024-11-18 20:37:28.018717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.522 qpair failed and we were unable to recover it. 00:36:16.522 [2024-11-18 20:37:28.018809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.522 [2024-11-18 20:37:28.018837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.522 qpair failed and we were unable to recover it. 
00:36:16.522 [2024-11-18 20:37:28.018930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.522 [2024-11-18 20:37:28.018958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.522 qpair failed and we were unable to recover it. 00:36:16.522 [2024-11-18 20:37:28.019082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.522 [2024-11-18 20:37:28.019109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.522 qpair failed and we were unable to recover it. 00:36:16.522 [2024-11-18 20:37:28.019227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.522 [2024-11-18 20:37:28.019254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.522 qpair failed and we were unable to recover it. 00:36:16.522 [2024-11-18 20:37:28.019361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.522 [2024-11-18 20:37:28.019388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.522 qpair failed and we were unable to recover it. 00:36:16.522 [2024-11-18 20:37:28.019499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.522 [2024-11-18 20:37:28.019527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.522 qpair failed and we were unable to recover it. 
00:36:16.522 [2024-11-18 20:37:28.019643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.522 [2024-11-18 20:37:28.019670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.522 qpair failed and we were unable to recover it. 00:36:16.522 [2024-11-18 20:37:28.019778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.522 [2024-11-18 20:37:28.019806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.522 qpair failed and we were unable to recover it. 00:36:16.522 [2024-11-18 20:37:28.019889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.522 [2024-11-18 20:37:28.019916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.522 qpair failed and we were unable to recover it. 00:36:16.522 [2024-11-18 20:37:28.020030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.522 [2024-11-18 20:37:28.020058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.522 qpair failed and we were unable to recover it. 00:36:16.522 [2024-11-18 20:37:28.020200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.522 [2024-11-18 20:37:28.020228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.522 qpair failed and we were unable to recover it. 
00:36:16.522 [2024-11-18 20:37:28.020350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.522 [2024-11-18 20:37:28.020379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.522 qpair failed and we were unable to recover it. 00:36:16.522 [2024-11-18 20:37:28.020520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.522 [2024-11-18 20:37:28.020546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.522 qpair failed and we were unable to recover it. 00:36:16.522 [2024-11-18 20:37:28.020661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.522 [2024-11-18 20:37:28.020688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.522 qpair failed and we were unable to recover it. 00:36:16.522 [2024-11-18 20:37:28.020804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.522 [2024-11-18 20:37:28.020831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.522 qpair failed and we were unable to recover it. 00:36:16.522 [2024-11-18 20:37:28.020938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.522 [2024-11-18 20:37:28.020965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.522 qpair failed and we were unable to recover it. 
00:36:16.522 [2024-11-18 20:37:28.021097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.021123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.021259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.021287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.021394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.021420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.021504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.021531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.021644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.021671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 
00:36:16.523 [2024-11-18 20:37:28.021751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.021777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.021871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.021898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.022009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.022040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.022154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.022181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.022290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.022317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 
00:36:16.523 [2024-11-18 20:37:28.022420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.022446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.022558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.022584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.022680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.022707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.022792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.022819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.022907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.022934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 
00:36:16.523 [2024-11-18 20:37:28.023042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.023069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.023163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.023189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.023298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.023324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.023435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.023462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.023550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.023577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 
00:36:16.523 [2024-11-18 20:37:28.023664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.023691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.023782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.023809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.023920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.023947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.024098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.024124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.024235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.024262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 
00:36:16.523 [2024-11-18 20:37:28.024353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.024379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.024459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.024486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.024567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.024595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.024747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.024774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.024912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.024938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 
00:36:16.523 [2024-11-18 20:37:28.025063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.025090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.025169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.025196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.025308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.025334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.025427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.025453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.025539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.025566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 
00:36:16.523 [2024-11-18 20:37:28.025691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.025719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.025832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.025859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.025979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.523 [2024-11-18 20:37:28.026006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.523 qpair failed and we were unable to recover it. 00:36:16.523 [2024-11-18 20:37:28.026118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.524 [2024-11-18 20:37:28.026144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.524 qpair failed and we were unable to recover it. 00:36:16.524 [2024-11-18 20:37:28.026249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.524 [2024-11-18 20:37:28.026275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.524 qpair failed and we were unable to recover it. 
00:36:16.524 [2024-11-18 20:37:28.026388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.524 [2024-11-18 20:37:28.026415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.524 qpair failed and we were unable to recover it. 00:36:16.524 [2024-11-18 20:37:28.026529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.524 [2024-11-18 20:37:28.026555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.524 qpair failed and we were unable to recover it. 00:36:16.524 [2024-11-18 20:37:28.026634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.524 [2024-11-18 20:37:28.026669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.524 qpair failed and we were unable to recover it. 00:36:16.524 [2024-11-18 20:37:28.026754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.524 [2024-11-18 20:37:28.026781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.524 qpair failed and we were unable to recover it. 00:36:16.524 [2024-11-18 20:37:28.026889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.524 [2024-11-18 20:37:28.026916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.524 qpair failed and we were unable to recover it. 
00:36:16.524 [2024-11-18 20:37:28.027022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.524 [2024-11-18 20:37:28.027048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.524 qpair failed and we were unable to recover it. 00:36:16.524 [2024-11-18 20:37:28.027131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.524 [2024-11-18 20:37:28.027157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.524 qpair failed and we were unable to recover it. 00:36:16.524 [2024-11-18 20:37:28.027246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.524 [2024-11-18 20:37:28.027276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.524 qpair failed and we were unable to recover it. 00:36:16.524 [2024-11-18 20:37:28.027388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.524 [2024-11-18 20:37:28.027415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.524 qpair failed and we were unable to recover it. 00:36:16.524 [2024-11-18 20:37:28.027512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.524 [2024-11-18 20:37:28.027539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.524 qpair failed and we were unable to recover it. 
00:36:16.524 [2024-11-18 20:37:28.027667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.524 [2024-11-18 20:37:28.027694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.524 qpair failed and we were unable to recover it. 00:36:16.524 [2024-11-18 20:37:28.027809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.524 [2024-11-18 20:37:28.027836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.524 qpair failed and we were unable to recover it. 00:36:16.524 [2024-11-18 20:37:28.027949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.524 [2024-11-18 20:37:28.027976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.524 qpair failed and we were unable to recover it. 00:36:16.524 [2024-11-18 20:37:28.028089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.524 [2024-11-18 20:37:28.028116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.524 qpair failed and we were unable to recover it. 00:36:16.524 [2024-11-18 20:37:28.028199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.524 [2024-11-18 20:37:28.028225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.524 qpair failed and we were unable to recover it. 
00:36:16.524 [2024-11-18 20:37:28.028337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.524 [2024-11-18 20:37:28.028363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.524 qpair failed and we were unable to recover it.
00:36:16.524 [2024-11-18 20:37:28.028447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.524 [2024-11-18 20:37:28.028474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.524 qpair failed and we were unable to recover it.
00:36:16.524 [2024-11-18 20:37:28.028555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.524 [2024-11-18 20:37:28.028581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.524 qpair failed and we were unable to recover it.
00:36:16.524 [2024-11-18 20:37:28.028723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.524 [2024-11-18 20:37:28.028750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.524 qpair failed and we were unable to recover it.
00:36:16.524 [2024-11-18 20:37:28.028889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.524 [2024-11-18 20:37:28.028916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.524 qpair failed and we were unable to recover it.
00:36:16.524 [2024-11-18 20:37:28.029059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.524 [2024-11-18 20:37:28.029086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.524 qpair failed and we were unable to recover it.
00:36:16.524 [2024-11-18 20:37:28.029215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.524 [2024-11-18 20:37:28.029242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.524 qpair failed and we were unable to recover it.
00:36:16.524 [2024-11-18 20:37:28.029381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.524 [2024-11-18 20:37:28.029408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.524 qpair failed and we were unable to recover it.
00:36:16.524 [2024-11-18 20:37:28.029519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.524 [2024-11-18 20:37:28.029546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.524 qpair failed and we were unable to recover it.
00:36:16.524 [2024-11-18 20:37:28.029629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.524 [2024-11-18 20:37:28.029661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.524 qpair failed and we were unable to recover it.
00:36:16.524 [2024-11-18 20:37:28.029749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.524 [2024-11-18 20:37:28.029775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.524 qpair failed and we were unable to recover it.
00:36:16.524 [2024-11-18 20:37:28.029890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.524 [2024-11-18 20:37:28.029916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.524 qpair failed and we were unable to recover it.
00:36:16.524 [2024-11-18 20:37:28.029996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.524 [2024-11-18 20:37:28.030023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.524 qpair failed and we were unable to recover it.
00:36:16.524 [2024-11-18 20:37:28.030101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.524 [2024-11-18 20:37:28.030129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.524 qpair failed and we were unable to recover it.
00:36:16.524 [2024-11-18 20:37:28.030223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.524 [2024-11-18 20:37:28.030249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.524 qpair failed and we were unable to recover it.
00:36:16.524 [2024-11-18 20:37:28.030358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.524 [2024-11-18 20:37:28.030385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.524 qpair failed and we were unable to recover it.
00:36:16.524 [2024-11-18 20:37:28.030499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.524 [2024-11-18 20:37:28.030526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.524 qpair failed and we were unable to recover it.
00:36:16.524 [2024-11-18 20:37:28.030652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.524 [2024-11-18 20:37:28.030679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.524 qpair failed and we were unable to recover it.
00:36:16.524 [2024-11-18 20:37:28.030794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.524 [2024-11-18 20:37:28.030820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.524 qpair failed and we were unable to recover it.
00:36:16.524 [2024-11-18 20:37:28.030934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.524 [2024-11-18 20:37:28.030961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.524 qpair failed and we were unable to recover it.
00:36:16.524 [2024-11-18 20:37:28.031100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.524 [2024-11-18 20:37:28.031127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.031218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.031244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.031325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.031351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.031435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.031462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.031618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.031667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.031764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.031794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.031881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.031910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.032002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.032030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.032168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.032196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.032340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.032367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.032451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.032479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.032569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.032596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.032718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.032751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.032891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.032917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.033017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.033043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.033159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.033185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.033327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.033354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.033493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.033519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.033632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.033666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.033807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.033833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.033921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.033948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.034042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.034069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.034220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.034247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.034359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.034385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.034498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.034526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.034645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.034674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.034776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.034804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.034885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.034913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.035135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.035195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.035357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.035404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.035517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.035545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.035662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.035691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.035801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.035829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.035968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.525 [2024-11-18 20:37:28.035996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.525 qpair failed and we were unable to recover it.
00:36:16.525 [2024-11-18 20:37:28.036084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.036112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.036192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.036219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.036300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.036329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.036449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.036477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.036589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.036617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.036769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.036797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.036948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.036976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.037059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.037087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.037263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.037290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.037378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.037404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.037548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.037574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.037687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.037754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.037966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.038017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.038102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.038128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.038300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.038353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.038438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.038464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.038541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.038567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.038676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.038703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.038843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.038877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.038963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.038990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.039084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.039110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.039189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.039217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.039307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.039333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.039476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.039502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.039650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.039678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.039791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.039818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.039933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.039959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.040097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.040124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.040230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.040257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.040340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.040365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.040506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.040534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.040619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.040684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.040808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.040835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.040950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.040977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.041090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.041116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.041234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.041261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.041366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.526 [2024-11-18 20:37:28.041393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.526 qpair failed and we were unable to recover it.
00:36:16.526 [2024-11-18 20:37:28.041484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.527 [2024-11-18 20:37:28.041511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.527 qpair failed and we were unable to recover it.
00:36:16.527 [2024-11-18 20:37:28.041651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.527 [2024-11-18 20:37:28.041679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.527 qpair failed and we were unable to recover it.
00:36:16.527 [2024-11-18 20:37:28.041792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.527 [2024-11-18 20:37:28.041819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.527 qpair failed and we were unable to recover it.
00:36:16.527 [2024-11-18 20:37:28.041963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.527 [2024-11-18 20:37:28.041990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.527 qpair failed and we were unable to recover it.
00:36:16.527 [2024-11-18 20:37:28.042098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.527 [2024-11-18 20:37:28.042125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.527 qpair failed and we were unable to recover it.
00:36:16.527 [2024-11-18 20:37:28.042235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.527 [2024-11-18 20:37:28.042262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.527 qpair failed and we were unable to recover it.
00:36:16.527 [2024-11-18 20:37:28.042411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.527 [2024-11-18 20:37:28.042438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.527 qpair failed and we were unable to recover it.
00:36:16.527 [2024-11-18 20:37:28.042555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.527 [2024-11-18 20:37:28.042582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.527 qpair failed and we were unable to recover it.
00:36:16.527 [2024-11-18 20:37:28.042714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.527 [2024-11-18 20:37:28.042742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.527 qpair failed and we were unable to recover it.
00:36:16.527 [2024-11-18 20:37:28.042830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.527 [2024-11-18 20:37:28.042858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.527 qpair failed and we were unable to recover it.
00:36:16.527 [2024-11-18 20:37:28.042977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.527 [2024-11-18 20:37:28.043004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.527 qpair failed and we were unable to recover it.
00:36:16.527 [2024-11-18 20:37:28.043088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.527 [2024-11-18 20:37:28.043115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.527 qpair failed and we were unable to recover it.
00:36:16.527 [2024-11-18 20:37:28.043201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.527 [2024-11-18 20:37:28.043228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.527 qpair failed and we were unable to recover it.
00:36:16.527 [2024-11-18 20:37:28.043336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.527 [2024-11-18 20:37:28.043363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.527 qpair failed and we were unable to recover it.
00:36:16.527 [2024-11-18 20:37:28.043446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.527 [2024-11-18 20:37:28.043472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.527 qpair failed and we were unable to recover it.
00:36:16.527 [2024-11-18 20:37:28.043613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.527 [2024-11-18 20:37:28.043645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.527 qpair failed and we were unable to recover it.
00:36:16.527 [2024-11-18 20:37:28.043745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.527 [2024-11-18 20:37:28.043772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.527 qpair failed and we were unable to recover it.
00:36:16.527 [2024-11-18 20:37:28.043852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.527 [2024-11-18 20:37:28.043879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.527 qpair failed and we were unable to recover it.
00:36:16.527 [2024-11-18 20:37:28.044002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.527 [2024-11-18 20:37:28.044029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.527 qpair failed and we were unable to recover it.
00:36:16.527 [2024-11-18 20:37:28.044177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.527 [2024-11-18 20:37:28.044204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.527 qpair failed and we were unable to recover it.
00:36:16.527 [2024-11-18 20:37:28.044289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.527 [2024-11-18 20:37:28.044316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.527 qpair failed and we were unable to recover it.
00:36:16.527 [2024-11-18 20:37:28.044429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.527 [2024-11-18 20:37:28.044460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.527 qpair failed and we were unable to recover it.
00:36:16.527 [2024-11-18 20:37:28.044549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.527 [2024-11-18 20:37:28.044576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.527 qpair failed and we were unable to recover it.
00:36:16.527 [2024-11-18 20:37:28.044665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.527 [2024-11-18 20:37:28.044702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.527 qpair failed and we were unable to recover it.
00:36:16.527 [2024-11-18 20:37:28.044814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.527 [2024-11-18 20:37:28.044841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.527 qpair failed and we were unable to recover it. 00:36:16.527 [2024-11-18 20:37:28.044949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.527 [2024-11-18 20:37:28.044976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.527 qpair failed and we were unable to recover it. 00:36:16.527 [2024-11-18 20:37:28.045068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.527 [2024-11-18 20:37:28.045094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.527 qpair failed and we were unable to recover it. 00:36:16.527 [2024-11-18 20:37:28.045208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.527 [2024-11-18 20:37:28.045235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.527 qpair failed and we were unable to recover it. 00:36:16.527 [2024-11-18 20:37:28.045343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.527 [2024-11-18 20:37:28.045370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.527 qpair failed and we were unable to recover it. 
00:36:16.527 [2024-11-18 20:37:28.045462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.527 [2024-11-18 20:37:28.045489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.527 qpair failed and we were unable to recover it. 00:36:16.527 [2024-11-18 20:37:28.045605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.527 [2024-11-18 20:37:28.045632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.527 qpair failed and we were unable to recover it. 00:36:16.527 [2024-11-18 20:37:28.045758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.527 [2024-11-18 20:37:28.045785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.527 qpair failed and we were unable to recover it. 00:36:16.527 [2024-11-18 20:37:28.045897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.527 [2024-11-18 20:37:28.045924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.527 qpair failed and we were unable to recover it. 00:36:16.527 [2024-11-18 20:37:28.046004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.527 [2024-11-18 20:37:28.046029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.527 qpair failed and we were unable to recover it. 
00:36:16.527 [2024-11-18 20:37:28.046147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.527 [2024-11-18 20:37:28.046174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.527 qpair failed and we were unable to recover it. 00:36:16.527 [2024-11-18 20:37:28.046288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.527 [2024-11-18 20:37:28.046315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.527 qpair failed and we were unable to recover it. 00:36:16.527 [2024-11-18 20:37:28.046392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.527 [2024-11-18 20:37:28.046420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.527 qpair failed and we were unable to recover it. 00:36:16.527 [2024-11-18 20:37:28.046502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.527 [2024-11-18 20:37:28.046529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.527 qpair failed and we were unable to recover it. 00:36:16.528 [2024-11-18 20:37:28.046646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.046684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 
00:36:16.528 [2024-11-18 20:37:28.046811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.046838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 00:36:16.528 [2024-11-18 20:37:28.046949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.046977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 00:36:16.528 [2024-11-18 20:37:28.047096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.047123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 00:36:16.528 [2024-11-18 20:37:28.047215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.047242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 00:36:16.528 [2024-11-18 20:37:28.047357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.047384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 
00:36:16.528 [2024-11-18 20:37:28.047509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.047535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 00:36:16.528 [2024-11-18 20:37:28.047663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.047699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 00:36:16.528 [2024-11-18 20:37:28.047807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.047835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 00:36:16.528 [2024-11-18 20:37:28.047966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.047993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 00:36:16.528 [2024-11-18 20:37:28.048107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.048134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 
00:36:16.528 [2024-11-18 20:37:28.048217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.048244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 00:36:16.528 [2024-11-18 20:37:28.048368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.048395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 00:36:16.528 [2024-11-18 20:37:28.048475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.048503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 00:36:16.528 [2024-11-18 20:37:28.048646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.048685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 00:36:16.528 [2024-11-18 20:37:28.048799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.048826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 
00:36:16.528 [2024-11-18 20:37:28.048917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.048950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 00:36:16.528 [2024-11-18 20:37:28.049090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.049117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 00:36:16.528 [2024-11-18 20:37:28.049255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.049282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 00:36:16.528 [2024-11-18 20:37:28.049387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.049414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 00:36:16.528 [2024-11-18 20:37:28.049552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.049578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 
00:36:16.528 [2024-11-18 20:37:28.049706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.049733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 00:36:16.528 [2024-11-18 20:37:28.049849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.049876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 00:36:16.528 [2024-11-18 20:37:28.049964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.049994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 00:36:16.528 [2024-11-18 20:37:28.050119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.050146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 00:36:16.528 [2024-11-18 20:37:28.050234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.050262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 
00:36:16.528 [2024-11-18 20:37:28.050372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.050399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 00:36:16.528 [2024-11-18 20:37:28.050480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.050507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 00:36:16.528 [2024-11-18 20:37:28.050594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.050621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 00:36:16.528 [2024-11-18 20:37:28.050745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.050771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 00:36:16.528 [2024-11-18 20:37:28.050884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.050911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 
00:36:16.528 [2024-11-18 20:37:28.051050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.051077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 00:36:16.528 [2024-11-18 20:37:28.051220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.051247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 00:36:16.528 [2024-11-18 20:37:28.051339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.051365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 00:36:16.528 [2024-11-18 20:37:28.051451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.051478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 00:36:16.528 [2024-11-18 20:37:28.051607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.051653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 
00:36:16.528 [2024-11-18 20:37:28.051752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.528 [2024-11-18 20:37:28.051782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.528 qpair failed and we were unable to recover it. 00:36:16.528 [2024-11-18 20:37:28.051884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.529 [2024-11-18 20:37:28.051913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.529 qpair failed and we were unable to recover it. 00:36:16.529 [2024-11-18 20:37:28.052061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.529 [2024-11-18 20:37:28.052089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.529 qpair failed and we were unable to recover it. 00:36:16.529 [2024-11-18 20:37:28.052260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.529 [2024-11-18 20:37:28.052311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.529 qpair failed and we were unable to recover it. 00:36:16.529 [2024-11-18 20:37:28.052425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.529 [2024-11-18 20:37:28.052454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.529 qpair failed and we were unable to recover it. 
00:36:16.529 [2024-11-18 20:37:28.052570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.529 [2024-11-18 20:37:28.052598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.529 qpair failed and we were unable to recover it. 00:36:16.529 [2024-11-18 20:37:28.052728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.529 [2024-11-18 20:37:28.052757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.529 qpair failed and we were unable to recover it. 00:36:16.529 [2024-11-18 20:37:28.052879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.529 [2024-11-18 20:37:28.052907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.529 qpair failed and we were unable to recover it. 00:36:16.529 [2024-11-18 20:37:28.053039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.529 [2024-11-18 20:37:28.053068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.529 qpair failed and we were unable to recover it. 00:36:16.529 [2024-11-18 20:37:28.053153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.529 [2024-11-18 20:37:28.053182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.529 qpair failed and we were unable to recover it. 
00:36:16.529 [2024-11-18 20:37:28.053295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.529 [2024-11-18 20:37:28.053323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.529 qpair failed and we were unable to recover it. 00:36:16.529 [2024-11-18 20:37:28.053422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.529 [2024-11-18 20:37:28.053450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.529 qpair failed and we were unable to recover it. 00:36:16.529 [2024-11-18 20:37:28.053592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.529 [2024-11-18 20:37:28.053620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.529 qpair failed and we were unable to recover it. 00:36:16.529 [2024-11-18 20:37:28.053753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.529 [2024-11-18 20:37:28.053781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.529 qpair failed and we were unable to recover it. 00:36:16.529 [2024-11-18 20:37:28.053924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.529 [2024-11-18 20:37:28.053952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.529 qpair failed and we were unable to recover it. 
00:36:16.529 [2024-11-18 20:37:28.054065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.529 [2024-11-18 20:37:28.054093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.529 qpair failed and we were unable to recover it. 00:36:16.529 [2024-11-18 20:37:28.054232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.529 [2024-11-18 20:37:28.054259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.529 qpair failed and we were unable to recover it. 00:36:16.529 [2024-11-18 20:37:28.054371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.529 [2024-11-18 20:37:28.054399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.529 qpair failed and we were unable to recover it. 00:36:16.529 [2024-11-18 20:37:28.054481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.529 [2024-11-18 20:37:28.054509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.529 qpair failed and we were unable to recover it. 00:36:16.529 [2024-11-18 20:37:28.054603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.529 [2024-11-18 20:37:28.054630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.529 qpair failed and we were unable to recover it. 
00:36:16.529 [2024-11-18 20:37:28.054744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.529 [2024-11-18 20:37:28.054772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.529 qpair failed and we were unable to recover it. 00:36:16.529 [2024-11-18 20:37:28.054912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.529 [2024-11-18 20:37:28.054951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.529 qpair failed and we were unable to recover it. 00:36:16.529 [2024-11-18 20:37:28.055030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.529 [2024-11-18 20:37:28.055058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.529 qpair failed and we were unable to recover it. 00:36:16.529 [2024-11-18 20:37:28.055207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.529 [2024-11-18 20:37:28.055235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.529 qpair failed and we were unable to recover it. 00:36:16.529 [2024-11-18 20:37:28.055344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.529 [2024-11-18 20:37:28.055377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.529 qpair failed and we were unable to recover it. 
00:36:16.529 [2024-11-18 20:37:28.055478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.529 [2024-11-18 20:37:28.055504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.529 qpair failed and we were unable to recover it. 00:36:16.529 [2024-11-18 20:37:28.055648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.529 [2024-11-18 20:37:28.055687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.529 qpair failed and we were unable to recover it. 00:36:16.529 [2024-11-18 20:37:28.055830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.529 [2024-11-18 20:37:28.055863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.529 qpair failed and we were unable to recover it. 00:36:16.529 [2024-11-18 20:37:28.055959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.529 [2024-11-18 20:37:28.055987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.529 qpair failed and we were unable to recover it. 00:36:16.529 [2024-11-18 20:37:28.056134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.529 [2024-11-18 20:37:28.056162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.529 qpair failed and we were unable to recover it. 
00:36:16.532 [2024-11-18 20:37:28.072101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.532 [2024-11-18 20:37:28.072128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.532 qpair failed and we were unable to recover it. 00:36:16.532 [2024-11-18 20:37:28.072210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.532 [2024-11-18 20:37:28.072235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.532 qpair failed and we were unable to recover it. 00:36:16.532 [2024-11-18 20:37:28.072323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.532 [2024-11-18 20:37:28.072350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.532 qpair failed and we were unable to recover it. 00:36:16.532 [2024-11-18 20:37:28.072459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.532 [2024-11-18 20:37:28.072488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.532 qpair failed and we were unable to recover it. 00:36:16.532 [2024-11-18 20:37:28.072576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.532 [2024-11-18 20:37:28.072604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.532 qpair failed and we were unable to recover it. 
00:36:16.532 [2024-11-18 20:37:28.072697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.532 [2024-11-18 20:37:28.072727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.532 qpair failed and we were unable to recover it. 00:36:16.532 [2024-11-18 20:37:28.072815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.532 [2024-11-18 20:37:28.072843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.532 qpair failed and we were unable to recover it. 00:36:16.532 [2024-11-18 20:37:28.073058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.532 [2024-11-18 20:37:28.073118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.532 qpair failed and we were unable to recover it. 00:36:16.532 [2024-11-18 20:37:28.073299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.532 [2024-11-18 20:37:28.073343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.532 qpair failed and we were unable to recover it. 00:36:16.532 [2024-11-18 20:37:28.073431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.073459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 
00:36:16.533 [2024-11-18 20:37:28.073570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.073597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.073717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.073745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.073867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.073894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.073973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.073999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.074111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.074139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 
00:36:16.533 [2024-11-18 20:37:28.074223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.074251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.074394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.074422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.074514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.074542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.074684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.074712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.074834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.074863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 
00:36:16.533 [2024-11-18 20:37:28.074974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.075002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.075100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.075128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.075243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.075271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.075382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.075409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.075571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.075599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 
00:36:16.533 [2024-11-18 20:37:28.075689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.075716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.075800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.075827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.075922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.075950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.076030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.076058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.076207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.076234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 
00:36:16.533 [2024-11-18 20:37:28.076339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.076367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.076458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.076485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.076623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.076656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.076735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.076763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.076853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.076881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 
00:36:16.533 [2024-11-18 20:37:28.077029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.077056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.077179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.077207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.077321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.077349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.077500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.077527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.077615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.077659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 
00:36:16.533 [2024-11-18 20:37:28.077779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.077807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.077924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.077980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.078176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.078227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.078314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.078341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.078418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.078445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 
00:36:16.533 [2024-11-18 20:37:28.078539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.078566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.078681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.078709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.078873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.078905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.079020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.079046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 00:36:16.533 [2024-11-18 20:37:28.079170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.533 [2024-11-18 20:37:28.079197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.533 qpair failed and we were unable to recover it. 
00:36:16.534 [2024-11-18 20:37:28.079310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.079338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 00:36:16.534 [2024-11-18 20:37:28.079423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.079453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 00:36:16.534 [2024-11-18 20:37:28.079570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.079598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 00:36:16.534 [2024-11-18 20:37:28.079727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.079756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 00:36:16.534 [2024-11-18 20:37:28.079871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.079899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 
00:36:16.534 [2024-11-18 20:37:28.080104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.080175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 00:36:16.534 [2024-11-18 20:37:28.080390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.080444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 00:36:16.534 [2024-11-18 20:37:28.080534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.080562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 00:36:16.534 [2024-11-18 20:37:28.080703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.080731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 00:36:16.534 [2024-11-18 20:37:28.080809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.080837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 
00:36:16.534 [2024-11-18 20:37:28.080930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.080959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 00:36:16.534 [2024-11-18 20:37:28.081080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.081108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 00:36:16.534 [2024-11-18 20:37:28.081247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.081275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 00:36:16.534 [2024-11-18 20:37:28.081391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.081426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 00:36:16.534 [2024-11-18 20:37:28.081574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.081601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 
00:36:16.534 [2024-11-18 20:37:28.081694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.081722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 00:36:16.534 [2024-11-18 20:37:28.081894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.081951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 00:36:16.534 [2024-11-18 20:37:28.082114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.082172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 00:36:16.534 [2024-11-18 20:37:28.082341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.082394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 00:36:16.534 [2024-11-18 20:37:28.082539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.082567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 
00:36:16.534 [2024-11-18 20:37:28.082676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.082702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 00:36:16.534 [2024-11-18 20:37:28.082780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.082807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 00:36:16.534 [2024-11-18 20:37:28.082994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.083053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 00:36:16.534 [2024-11-18 20:37:28.083271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.083326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 00:36:16.534 [2024-11-18 20:37:28.083441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.083472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 
00:36:16.534 [2024-11-18 20:37:28.083593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.083619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 00:36:16.534 [2024-11-18 20:37:28.083744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.083783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 00:36:16.534 [2024-11-18 20:37:28.083872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.083898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 00:36:16.534 [2024-11-18 20:37:28.084042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.084070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 00:36:16.534 [2024-11-18 20:37:28.084221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.084278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 
00:36:16.534 [2024-11-18 20:37:28.084356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.084383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 00:36:16.534 [2024-11-18 20:37:28.084494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.084521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 00:36:16.534 [2024-11-18 20:37:28.084595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.084621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 00:36:16.534 [2024-11-18 20:37:28.084737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.084764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 00:36:16.534 [2024-11-18 20:37:28.084845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.084872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 
00:36:16.534 [2024-11-18 20:37:28.084958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.084985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 00:36:16.534 [2024-11-18 20:37:28.085105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.534 [2024-11-18 20:37:28.085131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.534 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.085246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.085273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.085364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.085391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.085518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.085545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 
00:36:16.535 [2024-11-18 20:37:28.085664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.085692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.085803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.085830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.085943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.085970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.086113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.086140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.086278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.086305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 
00:36:16.535 [2024-11-18 20:37:28.086412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.086439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.086577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.086604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.086694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.086722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.086816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.086843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.086988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.087016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 
00:36:16.535 [2024-11-18 20:37:28.087097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.087125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.087244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.087271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.087395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.087422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.087546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.087573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.087685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.087713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 
00:36:16.535 [2024-11-18 20:37:28.087798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.087825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.087968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.087995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.088077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.088103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.088223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.088250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.088338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.088365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 
00:36:16.535 [2024-11-18 20:37:28.088473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.088500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.088615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.088649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.088734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.088759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.088868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.088895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.088982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.089013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 
00:36:16.535 [2024-11-18 20:37:28.089096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.089121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.089236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.089263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.089351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.089379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.089454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.089479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.089614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.089649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 
00:36:16.535 [2024-11-18 20:37:28.089759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.089787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.089897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.089923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.090017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.090044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.090130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.090158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.090279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.090306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 
00:36:16.535 [2024-11-18 20:37:28.090387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.090412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.090493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.090521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.090651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.090691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.090816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.090846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.535 [2024-11-18 20:37:28.090966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.090994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 
00:36:16.535 [2024-11-18 20:37:28.091135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.535 [2024-11-18 20:37:28.091163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.535 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.091245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.091273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.091384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.091412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.091510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.091538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.091616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.091652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 
00:36:16.536 [2024-11-18 20:37:28.091795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.091823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.091904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.091933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.092014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.092043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.092124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.092151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.092259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.092287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 
00:36:16.536 [2024-11-18 20:37:28.092406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.092434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.092541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.092569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.092658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.092687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.092832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.092860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.092986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.093013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 
00:36:16.536 [2024-11-18 20:37:28.093101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.093129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.093241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.093269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.093388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.093417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.093532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.093559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.093670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.093698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 
00:36:16.536 [2024-11-18 20:37:28.093787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.093814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.093927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.093954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.094081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.094109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.094223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.094250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.094392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.094424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 
00:36:16.536 [2024-11-18 20:37:28.094538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.094565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.094681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.094709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.094790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.094817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.094924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.094952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.095038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.095066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 
00:36:16.536 [2024-11-18 20:37:28.095170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.095197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.095314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.095341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.095430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.095457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.095574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.095603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.095737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.095766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 
00:36:16.536 [2024-11-18 20:37:28.095878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.095907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.096029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.096057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.096174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.096202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.096335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.096363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 00:36:16.536 [2024-11-18 20:37:28.096478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.536 [2024-11-18 20:37:28.096507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.536 qpair failed and we were unable to recover it. 
00:36:16.536 [2024-11-18 20:37:28.096653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.536 [2024-11-18 20:37:28.096682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.536 qpair failed and we were unable to recover it.
[... the three-message error record above (posix.c:1054:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error / qpair failed and we were unable to recover it.) repeats continuously from 2024-11-18 20:37:28.096653 through 20:37:28.112991, alternating between tqpair=0x7fe694000b90 and tqpair=0x7fe6a0000b90, always with addr=10.0.0.2, port=4420 ...]
00:36:16.539 [2024-11-18 20:37:28.113105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.539 [2024-11-18 20:37:28.113133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.539 qpair failed and we were unable to recover it. 00:36:16.539 [2024-11-18 20:37:28.113249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.539 [2024-11-18 20:37:28.113276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.539 qpair failed and we were unable to recover it. 00:36:16.539 [2024-11-18 20:37:28.113417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.539 [2024-11-18 20:37:28.113445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.539 qpair failed and we were unable to recover it. 00:36:16.539 [2024-11-18 20:37:28.113593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.539 [2024-11-18 20:37:28.113620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.539 qpair failed and we were unable to recover it. 00:36:16.539 [2024-11-18 20:37:28.113715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.539 [2024-11-18 20:37:28.113743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.539 qpair failed and we were unable to recover it. 
00:36:16.539 [2024-11-18 20:37:28.113864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.539 [2024-11-18 20:37:28.113892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.539 qpair failed and we were unable to recover it. 00:36:16.539 [2024-11-18 20:37:28.114031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.539 [2024-11-18 20:37:28.114058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.539 qpair failed and we were unable to recover it. 00:36:16.539 [2024-11-18 20:37:28.114177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.539 [2024-11-18 20:37:28.114205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.539 qpair failed and we were unable to recover it. 00:36:16.539 [2024-11-18 20:37:28.114287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.539 [2024-11-18 20:37:28.114316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.539 qpair failed and we were unable to recover it. 00:36:16.539 [2024-11-18 20:37:28.114396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.539 [2024-11-18 20:37:28.114422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.539 qpair failed and we were unable to recover it. 
00:36:16.539 [2024-11-18 20:37:28.114542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.539 [2024-11-18 20:37:28.114571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.539 qpair failed and we were unable to recover it. 00:36:16.539 [2024-11-18 20:37:28.114662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.539 [2024-11-18 20:37:28.114690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.539 qpair failed and we were unable to recover it. 00:36:16.539 [2024-11-18 20:37:28.114810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.539 [2024-11-18 20:37:28.114837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.539 qpair failed and we were unable to recover it. 00:36:16.539 [2024-11-18 20:37:28.115006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.539 [2024-11-18 20:37:28.115058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.539 qpair failed and we were unable to recover it. 00:36:16.539 [2024-11-18 20:37:28.115220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.539 [2024-11-18 20:37:28.115287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.539 qpair failed and we were unable to recover it. 
00:36:16.539 [2024-11-18 20:37:28.115430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.539 [2024-11-18 20:37:28.115457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.539 qpair failed and we were unable to recover it. 00:36:16.539 [2024-11-18 20:37:28.115604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.539 [2024-11-18 20:37:28.115631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.539 qpair failed and we were unable to recover it. 00:36:16.539 [2024-11-18 20:37:28.115755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.539 [2024-11-18 20:37:28.115782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.539 qpair failed and we were unable to recover it. 00:36:16.539 [2024-11-18 20:37:28.115966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.539 [2024-11-18 20:37:28.116015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.539 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.116179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.116235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 
00:36:16.540 [2024-11-18 20:37:28.116316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.116341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.116421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.116447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.116562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.116601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.116726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.116753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.116832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.116859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 
00:36:16.540 [2024-11-18 20:37:28.116938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.116965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.117076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.117110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.117251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.117277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.117362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.117389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.117476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.117507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 
00:36:16.540 [2024-11-18 20:37:28.117593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.117618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.117736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.117764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.117881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.117908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.118025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.118052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.118164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.118191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 
00:36:16.540 [2024-11-18 20:37:28.118305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.118332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.118422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.118450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.118533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.118561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.118683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.118711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.118822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.118849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 
00:36:16.540 [2024-11-18 20:37:28.118935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.118962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.119036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.119062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.119156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.119183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.119329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.119356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.119501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.119528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 
00:36:16.540 [2024-11-18 20:37:28.119643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.119670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.119807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.119834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.119948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.119976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.120093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.120120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.120233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.120259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 
00:36:16.540 [2024-11-18 20:37:28.120344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.120370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.120508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.120535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.120614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.120649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.120802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.120829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.120944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.120971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 
00:36:16.540 [2024-11-18 20:37:28.121109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.121136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.121231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.121258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.121372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.121398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.121485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.121513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.121655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.121683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 
00:36:16.540 [2024-11-18 20:37:28.121779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.121807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.121952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.121979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.122121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.122148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.122236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.122263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.540 [2024-11-18 20:37:28.122353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.122378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 
00:36:16.540 [2024-11-18 20:37:28.122461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.540 [2024-11-18 20:37:28.122488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.540 qpair failed and we were unable to recover it. 00:36:16.541 [2024-11-18 20:37:28.122600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.541 [2024-11-18 20:37:28.122627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.541 qpair failed and we were unable to recover it. 00:36:16.541 [2024-11-18 20:37:28.122751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.541 [2024-11-18 20:37:28.122778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.541 qpair failed and we were unable to recover it. 00:36:16.541 [2024-11-18 20:37:28.122871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.541 [2024-11-18 20:37:28.122898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.541 qpair failed and we were unable to recover it. 00:36:16.541 [2024-11-18 20:37:28.123024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.541 [2024-11-18 20:37:28.123055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.541 qpair failed and we were unable to recover it. 
00:36:16.541 [2024-11-18 20:37:28.123173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.541 [2024-11-18 20:37:28.123201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.541 qpair failed and we were unable to recover it. 00:36:16.541 [2024-11-18 20:37:28.123281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.541 [2024-11-18 20:37:28.123308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.541 qpair failed and we were unable to recover it. 00:36:16.541 [2024-11-18 20:37:28.123397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.541 [2024-11-18 20:37:28.123424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.541 qpair failed and we were unable to recover it. 00:36:16.541 [2024-11-18 20:37:28.123534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.541 [2024-11-18 20:37:28.123561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.541 qpair failed and we were unable to recover it. 00:36:16.541 [2024-11-18 20:37:28.123697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.541 [2024-11-18 20:37:28.123725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.541 qpair failed and we were unable to recover it. 
00:36:16.541 [2024-11-18 20:37:28.123842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.541 [2024-11-18 20:37:28.123869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.541 qpair failed and we were unable to recover it. 
[... the preceding connect()/qpair-failure triple repeats 100 more times for tqpair=0x7fe6a0000b90 (same addr=10.0.0.2, port=4420, errno = 111) between 20:37:28.124008 and 20:37:28.139151 ...]
00:36:16.543 [2024-11-18 20:37:28.139261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f970 is same with the state(6) to be set 
00:36:16.543 [2024-11-18 20:37:28.139440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.543 [2024-11-18 20:37:28.139480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.543 qpair failed and we were unable to recover it. 
[... repeats 5 more times for tqpair=0x1671b40 between 20:37:28.139597 and 20:37:28.140174 ...]
00:36:16.543 [2024-11-18 20:37:28.140290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.543 [2024-11-18 20:37:28.140346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.543 qpair failed and we were unable to recover it. 
[... repeats 7 more times for tqpair=0x7fe6a0000b90 between 20:37:28.140463 and 20:37:28.141534 ...]
00:36:16.544 [2024-11-18 20:37:28.141649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.141676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 00:36:16.544 [2024-11-18 20:37:28.141835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.141893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 00:36:16.544 [2024-11-18 20:37:28.142103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.142154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 00:36:16.544 [2024-11-18 20:37:28.142267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.142294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 00:36:16.544 [2024-11-18 20:37:28.142378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.142405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 
00:36:16.544 [2024-11-18 20:37:28.142525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.142561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 00:36:16.544 [2024-11-18 20:37:28.142729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.142788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 00:36:16.544 [2024-11-18 20:37:28.142929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.142981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 00:36:16.544 [2024-11-18 20:37:28.143112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.143161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 00:36:16.544 [2024-11-18 20:37:28.143277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.143316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 
00:36:16.544 [2024-11-18 20:37:28.143429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.143456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 00:36:16.544 [2024-11-18 20:37:28.143552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.143579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 00:36:16.544 [2024-11-18 20:37:28.143669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.143701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 00:36:16.544 [2024-11-18 20:37:28.143782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.143809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 00:36:16.544 [2024-11-18 20:37:28.143925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.143951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 
00:36:16.544 [2024-11-18 20:37:28.144040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.144067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 00:36:16.544 [2024-11-18 20:37:28.144180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.144207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 00:36:16.544 [2024-11-18 20:37:28.144348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.144375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 00:36:16.544 [2024-11-18 20:37:28.144459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.144490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 00:36:16.544 [2024-11-18 20:37:28.144605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.144632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 
00:36:16.544 [2024-11-18 20:37:28.144719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.144744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 00:36:16.544 [2024-11-18 20:37:28.144862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.144889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 00:36:16.544 [2024-11-18 20:37:28.144970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.144996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 00:36:16.544 [2024-11-18 20:37:28.145108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.145134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 00:36:16.544 [2024-11-18 20:37:28.145249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.145276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 
00:36:16.544 [2024-11-18 20:37:28.145358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.145384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 00:36:16.544 [2024-11-18 20:37:28.145531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.145558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 00:36:16.544 [2024-11-18 20:37:28.145648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.145675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 00:36:16.544 [2024-11-18 20:37:28.145762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.145789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 00:36:16.544 [2024-11-18 20:37:28.145869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.145895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 
00:36:16.544 [2024-11-18 20:37:28.146037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.146064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 00:36:16.544 [2024-11-18 20:37:28.146201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.146227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 00:36:16.544 [2024-11-18 20:37:28.146345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.146371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 00:36:16.544 [2024-11-18 20:37:28.146491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.544 [2024-11-18 20:37:28.146518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.544 qpair failed and we were unable to recover it. 00:36:16.544 [2024-11-18 20:37:28.146626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.146673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 
00:36:16.545 [2024-11-18 20:37:28.146864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.146936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 00:36:16.545 [2024-11-18 20:37:28.147059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.147086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 00:36:16.545 [2024-11-18 20:37:28.147182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.147208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 00:36:16.545 [2024-11-18 20:37:28.147290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.147316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 00:36:16.545 [2024-11-18 20:37:28.147431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.147458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 
00:36:16.545 [2024-11-18 20:37:28.147568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.147595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 00:36:16.545 [2024-11-18 20:37:28.147692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.147720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 00:36:16.545 [2024-11-18 20:37:28.147810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.147838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 00:36:16.545 [2024-11-18 20:37:28.147923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.147950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 00:36:16.545 [2024-11-18 20:37:28.148065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.148093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 
00:36:16.545 [2024-11-18 20:37:28.148176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.148201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 00:36:16.545 [2024-11-18 20:37:28.148318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.148345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 00:36:16.545 [2024-11-18 20:37:28.148429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.148456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 00:36:16.545 [2024-11-18 20:37:28.148548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.148576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 00:36:16.545 [2024-11-18 20:37:28.148725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.148753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 
00:36:16.545 [2024-11-18 20:37:28.148838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.148863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 00:36:16.545 [2024-11-18 20:37:28.148977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.149005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 00:36:16.545 [2024-11-18 20:37:28.149090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.149122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 00:36:16.545 [2024-11-18 20:37:28.149231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.149258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 00:36:16.545 [2024-11-18 20:37:28.149376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.149402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 
00:36:16.545 [2024-11-18 20:37:28.149489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.149517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 00:36:16.545 [2024-11-18 20:37:28.149672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.149699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 00:36:16.545 [2024-11-18 20:37:28.149775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.149801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 00:36:16.545 [2024-11-18 20:37:28.149893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.149920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 00:36:16.545 [2024-11-18 20:37:28.150030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.150057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 
00:36:16.545 [2024-11-18 20:37:28.150135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.150162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 00:36:16.545 [2024-11-18 20:37:28.150247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.150275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 00:36:16.545 [2024-11-18 20:37:28.150354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.150381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 00:36:16.545 [2024-11-18 20:37:28.150455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.150479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 00:36:16.545 [2024-11-18 20:37:28.150611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.150660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 
00:36:16.545 [2024-11-18 20:37:28.150801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.150831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 00:36:16.545 [2024-11-18 20:37:28.150972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.151013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 00:36:16.545 [2024-11-18 20:37:28.151125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.151153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 00:36:16.545 [2024-11-18 20:37:28.151299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.151327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 00:36:16.545 [2024-11-18 20:37:28.151417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.151445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 
00:36:16.545 [2024-11-18 20:37:28.151552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.545 [2024-11-18 20:37:28.151581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.545 qpair failed and we were unable to recover it. 00:36:16.546 [2024-11-18 20:37:28.151782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.546 [2024-11-18 20:37:28.151836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.546 qpair failed and we were unable to recover it. 00:36:16.546 [2024-11-18 20:37:28.151924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.546 [2024-11-18 20:37:28.151950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.546 qpair failed and we were unable to recover it. 00:36:16.546 [2024-11-18 20:37:28.152109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.546 [2024-11-18 20:37:28.152168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.546 qpair failed and we were unable to recover it. 00:36:16.546 [2024-11-18 20:37:28.152398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.546 [2024-11-18 20:37:28.152452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.546 qpair failed and we were unable to recover it. 
00:36:16.546 [2024-11-18 20:37:28.152567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.546 [2024-11-18 20:37:28.152593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.546 qpair failed and we were unable to recover it. 00:36:16.546 [2024-11-18 20:37:28.152728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.546 [2024-11-18 20:37:28.152785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.546 qpair failed and we were unable to recover it. 00:36:16.546 [2024-11-18 20:37:28.153015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.546 [2024-11-18 20:37:28.153066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.546 qpair failed and we were unable to recover it. 00:36:16.546 [2024-11-18 20:37:28.153242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.546 [2024-11-18 20:37:28.153300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.546 qpair failed and we were unable to recover it. 00:36:16.546 [2024-11-18 20:37:28.153420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.546 [2024-11-18 20:37:28.153451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.546 qpair failed and we were unable to recover it. 
00:36:16.546 [2024-11-18 20:37:28.153566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.546 [2024-11-18 20:37:28.153594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.546 qpair failed and we were unable to recover it. 00:36:16.546 [2024-11-18 20:37:28.153691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.546 [2024-11-18 20:37:28.153720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.546 qpair failed and we were unable to recover it. 00:36:16.546 [2024-11-18 20:37:28.153811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.546 [2024-11-18 20:37:28.153838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.546 qpair failed and we were unable to recover it. 00:36:16.546 [2024-11-18 20:37:28.153961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.546 [2024-11-18 20:37:28.153988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.546 qpair failed and we were unable to recover it. 00:36:16.546 [2024-11-18 20:37:28.154104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.546 [2024-11-18 20:37:28.154131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.546 qpair failed and we were unable to recover it. 
00:36:16.546 [2024-11-18 20:37:28.154309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.546 [2024-11-18 20:37:28.154363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.546 qpair failed and we were unable to recover it.
00:36:16.546 [2024-11-18 20:37:28.154454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.546 [2024-11-18 20:37:28.154480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.546 qpair failed and we were unable to recover it.
00:36:16.546 [2024-11-18 20:37:28.154609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.546 [2024-11-18 20:37:28.154649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.546 qpair failed and we were unable to recover it.
00:36:16.546 [2024-11-18 20:37:28.154789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.546 [2024-11-18 20:37:28.154816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.546 qpair failed and we were unable to recover it.
00:36:16.546 [2024-11-18 20:37:28.155037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.546 [2024-11-18 20:37:28.155099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.546 qpair failed and we were unable to recover it.
00:36:16.546 [2024-11-18 20:37:28.155255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.546 [2024-11-18 20:37:28.155308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.546 qpair failed and we were unable to recover it.
00:36:16.546 [2024-11-18 20:37:28.155425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.546 [2024-11-18 20:37:28.155454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.546 qpair failed and we were unable to recover it.
00:36:16.546 [2024-11-18 20:37:28.155589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.546 [2024-11-18 20:37:28.155622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.546 qpair failed and we were unable to recover it.
00:36:16.546 [2024-11-18 20:37:28.155710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.546 [2024-11-18 20:37:28.155736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.546 qpair failed and we were unable to recover it.
00:36:16.546 [2024-11-18 20:37:28.155884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.546 [2024-11-18 20:37:28.155911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.546 qpair failed and we were unable to recover it.
00:36:16.546 [2024-11-18 20:37:28.156022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.546 [2024-11-18 20:37:28.156049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.546 qpair failed and we were unable to recover it.
00:36:16.546 [2024-11-18 20:37:28.156132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.546 [2024-11-18 20:37:28.156159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.546 qpair failed and we were unable to recover it.
00:36:16.546 [2024-11-18 20:37:28.156301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.546 [2024-11-18 20:37:28.156328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.546 qpair failed and we were unable to recover it.
00:36:16.546 [2024-11-18 20:37:28.156616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.546 [2024-11-18 20:37:28.156698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.546 qpair failed and we were unable to recover it.
00:36:16.546 [2024-11-18 20:37:28.156820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.546 [2024-11-18 20:37:28.156847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.546 qpair failed and we were unable to recover it.
00:36:16.546 [2024-11-18 20:37:28.157034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.546 [2024-11-18 20:37:28.157100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.546 qpair failed and we were unable to recover it.
00:36:16.546 [2024-11-18 20:37:28.157312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.546 [2024-11-18 20:37:28.157379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.546 qpair failed and we were unable to recover it.
00:36:16.546 [2024-11-18 20:37:28.157618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.546 [2024-11-18 20:37:28.157707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.546 qpair failed and we were unable to recover it.
00:36:16.546 [2024-11-18 20:37:28.157823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.546 [2024-11-18 20:37:28.157852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.546 qpair failed and we were unable to recover it.
00:36:16.546 [2024-11-18 20:37:28.158032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.546 [2024-11-18 20:37:28.158086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.546 qpair failed and we were unable to recover it.
00:36:16.546 [2024-11-18 20:37:28.158242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.546 [2024-11-18 20:37:28.158297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.546 qpair failed and we were unable to recover it.
00:36:16.546 [2024-11-18 20:37:28.158464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.546 [2024-11-18 20:37:28.158522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.546 qpair failed and we were unable to recover it.
00:36:16.546 [2024-11-18 20:37:28.158704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.546 [2024-11-18 20:37:28.158762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.546 qpair failed and we were unable to recover it.
00:36:16.546 [2024-11-18 20:37:28.158873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.546 [2024-11-18 20:37:28.158934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.159051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.159078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.159189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.159227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.159342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.159369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.159517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.159544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.159670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.159699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.159815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.159842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.159953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.159978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.160051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.160077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.160190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.160218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.160310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.160349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.160440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.160470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.160569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.160609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.160742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.160770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.160887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.160915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.161033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.161061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.161150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.161177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.161299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.161325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.161432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.161473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.161571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.161601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.161735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.161764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.161961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.162028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.162211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.162278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.162531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.162599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.162796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.162831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.162923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.162949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.163220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.163291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.163571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.163652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.163813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.163841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.163982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.164009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.164174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.164241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.547 [2024-11-18 20:37:28.164530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.547 [2024-11-18 20:37:28.164597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.547 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.164819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.164847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.164967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.164994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.165111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.165173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.165462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.165528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.165766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.165794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.165906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.165999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.166308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.166378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.166610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.166646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.166744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.166770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.166884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.166917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.166997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.167022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.167162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.167202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.167463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.167530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.167745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.167774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.167915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.167984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.168279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.168346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.168614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.168704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.168823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.168850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.168990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.169018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.169290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.169368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.169570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.169661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.169827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.169855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.169974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.170001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.170114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.170142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.170239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.170266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.170378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.170405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.170523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.170550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.170807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.170836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.170994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.171044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.171226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.171293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.171520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.171586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.171871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.171926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.172152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.172219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.172515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.172587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.172895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.172949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.173190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.173256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.548 [2024-11-18 20:37:28.173521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.548 [2024-11-18 20:37:28.173588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.548 qpair failed and we were unable to recover it.
00:36:16.549 [2024-11-18 20:37:28.173887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.549 [2024-11-18 20:37:28.173940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.549 qpair failed and we were unable to recover it.
00:36:16.549 [2024-11-18 20:37:28.174130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.549 [2024-11-18 20:37:28.174199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.549 qpair failed and we were unable to recover it.
00:36:16.549 [2024-11-18 20:37:28.174429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.549 [2024-11-18 20:37:28.174495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.549 qpair failed and we were unable to recover it.
00:36:16.549 [2024-11-18 20:37:28.174745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.174803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 00:36:16.549 [2024-11-18 20:37:28.175063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.175146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 00:36:16.549 [2024-11-18 20:37:28.175406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.175472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 00:36:16.549 [2024-11-18 20:37:28.175772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.175830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 00:36:16.549 [2024-11-18 20:37:28.176084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.176153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 
00:36:16.549 [2024-11-18 20:37:28.176443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.176511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 00:36:16.549 [2024-11-18 20:37:28.176791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.176850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 00:36:16.549 [2024-11-18 20:37:28.177034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.177092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 00:36:16.549 [2024-11-18 20:37:28.177319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.177387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 00:36:16.549 [2024-11-18 20:37:28.177660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.177721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 
00:36:16.549 [2024-11-18 20:37:28.177950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.178016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 00:36:16.549 [2024-11-18 20:37:28.178270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.178337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 00:36:16.549 [2024-11-18 20:37:28.178634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.178707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 00:36:16.549 [2024-11-18 20:37:28.178965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.179041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 00:36:16.549 [2024-11-18 20:37:28.179268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.179335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 
00:36:16.549 [2024-11-18 20:37:28.179581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.179658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 00:36:16.549 [2024-11-18 20:37:28.179965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.180027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 00:36:16.549 [2024-11-18 20:37:28.180284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.180350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 00:36:16.549 [2024-11-18 20:37:28.180615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.180701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 00:36:16.549 [2024-11-18 20:37:28.180962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.181042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 
00:36:16.549 [2024-11-18 20:37:28.181291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.181359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 00:36:16.549 [2024-11-18 20:37:28.181613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.181701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 00:36:16.549 [2024-11-18 20:37:28.181963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.182032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 00:36:16.549 [2024-11-18 20:37:28.182262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.182328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 00:36:16.549 [2024-11-18 20:37:28.182551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.182612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 
00:36:16.549 [2024-11-18 20:37:28.182924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.182991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 00:36:16.549 [2024-11-18 20:37:28.183284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.183350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 00:36:16.549 [2024-11-18 20:37:28.183661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.183729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 00:36:16.549 [2024-11-18 20:37:28.184028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.184095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 00:36:16.549 [2024-11-18 20:37:28.184379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.184446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 
00:36:16.549 [2024-11-18 20:37:28.184752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.184820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 00:36:16.549 [2024-11-18 20:37:28.185121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.185189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 00:36:16.549 [2024-11-18 20:37:28.185482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.185549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 00:36:16.549 [2024-11-18 20:37:28.185870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.549 [2024-11-18 20:37:28.185938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.549 qpair failed and we were unable to recover it. 00:36:16.549 [2024-11-18 20:37:28.186245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.186313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 
00:36:16.550 [2024-11-18 20:37:28.186551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.186618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 00:36:16.550 [2024-11-18 20:37:28.186942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.187009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 00:36:16.550 [2024-11-18 20:37:28.187301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.187368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 00:36:16.550 [2024-11-18 20:37:28.187662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.187730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 00:36:16.550 [2024-11-18 20:37:28.188023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.188091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 
00:36:16.550 [2024-11-18 20:37:28.188383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.188451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 00:36:16.550 [2024-11-18 20:37:28.188703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.188772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 00:36:16.550 [2024-11-18 20:37:28.189017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.189085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 00:36:16.550 [2024-11-18 20:37:28.189372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.189438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 00:36:16.550 [2024-11-18 20:37:28.189695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.189766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 
00:36:16.550 [2024-11-18 20:37:28.190025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.190094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 00:36:16.550 [2024-11-18 20:37:28.190361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.190428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 00:36:16.550 [2024-11-18 20:37:28.190687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.190755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 00:36:16.550 [2024-11-18 20:37:28.191053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.191121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 00:36:16.550 [2024-11-18 20:37:28.191416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.191482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 
00:36:16.550 [2024-11-18 20:37:28.191739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.191807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 00:36:16.550 [2024-11-18 20:37:28.192005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.192075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 00:36:16.550 [2024-11-18 20:37:28.192275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.192343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 00:36:16.550 [2024-11-18 20:37:28.192559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.192629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 00:36:16.550 [2024-11-18 20:37:28.192870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.192937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 
00:36:16.550 [2024-11-18 20:37:28.193202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.193269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 00:36:16.550 [2024-11-18 20:37:28.193514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.193582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 00:36:16.550 [2024-11-18 20:37:28.193869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.193938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 00:36:16.550 [2024-11-18 20:37:28.194197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.194263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 00:36:16.550 [2024-11-18 20:37:28.194554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.194632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 
00:36:16.550 [2024-11-18 20:37:28.194907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.194976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 00:36:16.550 [2024-11-18 20:37:28.195265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.195331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 00:36:16.550 [2024-11-18 20:37:28.195590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.195672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 00:36:16.550 [2024-11-18 20:37:28.195862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.195930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 00:36:16.550 [2024-11-18 20:37:28.196231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.196297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 
00:36:16.550 [2024-11-18 20:37:28.196597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.196681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 00:36:16.550 [2024-11-18 20:37:28.196935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.197002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 00:36:16.550 [2024-11-18 20:37:28.197296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.197363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 00:36:16.550 [2024-11-18 20:37:28.197560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.197628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 00:36:16.550 [2024-11-18 20:37:28.197931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.197998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 
00:36:16.550 [2024-11-18 20:37:28.198243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.198310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 00:36:16.550 [2024-11-18 20:37:28.198615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.550 [2024-11-18 20:37:28.198707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.550 qpair failed and we were unable to recover it. 00:36:16.550 [2024-11-18 20:37:28.199001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.551 [2024-11-18 20:37:28.199067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.551 qpair failed and we were unable to recover it. 00:36:16.551 [2024-11-18 20:37:28.199373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.551 [2024-11-18 20:37:28.199440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.551 qpair failed and we were unable to recover it. 00:36:16.551 [2024-11-18 20:37:28.199766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.551 [2024-11-18 20:37:28.199835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.551 qpair failed and we were unable to recover it. 
00:36:16.551 [2024-11-18 20:37:28.200084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.551 [2024-11-18 20:37:28.200149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.551 qpair failed and we were unable to recover it. 00:36:16.551 [2024-11-18 20:37:28.200393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.551 [2024-11-18 20:37:28.200462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.551 qpair failed and we were unable to recover it. 00:36:16.551 [2024-11-18 20:37:28.200719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.551 [2024-11-18 20:37:28.200788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.551 qpair failed and we were unable to recover it. 00:36:16.551 [2024-11-18 20:37:28.201008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.551 [2024-11-18 20:37:28.201076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.551 qpair failed and we were unable to recover it. 00:36:16.551 [2024-11-18 20:37:28.201333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.551 [2024-11-18 20:37:28.201400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.551 qpair failed and we were unable to recover it. 
00:36:16.551 [2024-11-18 20:37:28.201609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.551 [2024-11-18 20:37:28.201704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.551 qpair failed and we were unable to recover it. 00:36:16.551 [2024-11-18 20:37:28.201957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.551 [2024-11-18 20:37:28.202025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.551 qpair failed and we were unable to recover it. 00:36:16.551 [2024-11-18 20:37:28.202318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.551 [2024-11-18 20:37:28.202385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.551 qpair failed and we were unable to recover it. 00:36:16.551 [2024-11-18 20:37:28.202631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.551 [2024-11-18 20:37:28.202714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.551 qpair failed and we were unable to recover it. 00:36:16.551 [2024-11-18 20:37:28.202981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.551 [2024-11-18 20:37:28.203048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.551 qpair failed and we were unable to recover it. 
00:36:16.551 [2024-11-18 20:37:28.203293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.551 [2024-11-18 20:37:28.203359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.551 qpair failed and we were unable to recover it. 00:36:16.551 [2024-11-18 20:37:28.203679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.551 [2024-11-18 20:37:28.203748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.551 qpair failed and we were unable to recover it. 00:36:16.551 [2024-11-18 20:37:28.204053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.551 [2024-11-18 20:37:28.204120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.551 qpair failed and we were unable to recover it. 00:36:16.551 [2024-11-18 20:37:28.204368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.551 [2024-11-18 20:37:28.204435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.551 qpair failed and we were unable to recover it. 00:36:16.551 [2024-11-18 20:37:28.204660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.551 [2024-11-18 20:37:28.204733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.551 qpair failed and we were unable to recover it. 
00:36:16.554 [2024-11-18 20:37:28.242308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.554 [2024-11-18 20:37:28.242377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.554 qpair failed and we were unable to recover it. 00:36:16.554 [2024-11-18 20:37:28.242672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.554 [2024-11-18 20:37:28.242743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.554 qpair failed and we were unable to recover it. 00:36:16.554 [2024-11-18 20:37:28.243000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.554 [2024-11-18 20:37:28.243067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.554 qpair failed and we were unable to recover it. 00:36:16.554 [2024-11-18 20:37:28.243265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.554 [2024-11-18 20:37:28.243331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.554 qpair failed and we were unable to recover it. 00:36:16.554 [2024-11-18 20:37:28.243621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.554 [2024-11-18 20:37:28.243724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.554 qpair failed and we were unable to recover it. 
00:36:16.554 [2024-11-18 20:37:28.243947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.554 [2024-11-18 20:37:28.244015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.554 qpair failed and we were unable to recover it. 00:36:16.554 [2024-11-18 20:37:28.244335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.554 [2024-11-18 20:37:28.244402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.554 qpair failed and we were unable to recover it. 00:36:16.554 [2024-11-18 20:37:28.244658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.554 [2024-11-18 20:37:28.244727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.554 qpair failed and we were unable to recover it. 00:36:16.554 [2024-11-18 20:37:28.245018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.554 [2024-11-18 20:37:28.245085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.554 qpair failed and we were unable to recover it. 00:36:16.554 [2024-11-18 20:37:28.245277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.554 [2024-11-18 20:37:28.245344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.554 qpair failed and we were unable to recover it. 
00:36:16.554 [2024-11-18 20:37:28.245650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.554 [2024-11-18 20:37:28.245718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.554 qpair failed and we were unable to recover it. 00:36:16.554 [2024-11-18 20:37:28.246013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.554 [2024-11-18 20:37:28.246079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.554 qpair failed and we were unable to recover it. 00:36:16.554 [2024-11-18 20:37:28.246359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.554 [2024-11-18 20:37:28.246426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.554 qpair failed and we were unable to recover it. 00:36:16.554 [2024-11-18 20:37:28.246736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.554 [2024-11-18 20:37:28.246805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.554 qpair failed and we were unable to recover it. 00:36:16.554 [2024-11-18 20:37:28.247109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.554 [2024-11-18 20:37:28.247175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.554 qpair failed and we were unable to recover it. 
00:36:16.554 [2024-11-18 20:37:28.247431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.554 [2024-11-18 20:37:28.247499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.554 qpair failed and we were unable to recover it. 00:36:16.554 [2024-11-18 20:37:28.247734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.554 [2024-11-18 20:37:28.247803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.554 qpair failed and we were unable to recover it. 00:36:16.554 [2024-11-18 20:37:28.248054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.554 [2024-11-18 20:37:28.248123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.554 qpair failed and we were unable to recover it. 00:36:16.554 [2024-11-18 20:37:28.248424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.248491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 00:36:16.555 [2024-11-18 20:37:28.248749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.248829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 
00:36:16.555 [2024-11-18 20:37:28.249083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.249150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 00:36:16.555 [2024-11-18 20:37:28.249403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.249470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 00:36:16.555 [2024-11-18 20:37:28.249787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.249855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 00:36:16.555 [2024-11-18 20:37:28.250104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.250172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 00:36:16.555 [2024-11-18 20:37:28.250430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.250497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 
00:36:16.555 [2024-11-18 20:37:28.250781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.250851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 00:36:16.555 [2024-11-18 20:37:28.251094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.251160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 00:36:16.555 [2024-11-18 20:37:28.251419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.251486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 00:36:16.555 [2024-11-18 20:37:28.251698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.251768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 00:36:16.555 [2024-11-18 20:37:28.252019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.252085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 
00:36:16.555 [2024-11-18 20:37:28.252379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.252447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 00:36:16.555 [2024-11-18 20:37:28.252744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.252813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 00:36:16.555 [2024-11-18 20:37:28.253106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.253174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 00:36:16.555 [2024-11-18 20:37:28.253481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.253549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 00:36:16.555 [2024-11-18 20:37:28.253866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.253934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 
00:36:16.555 [2024-11-18 20:37:28.254123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.254193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 00:36:16.555 [2024-11-18 20:37:28.254447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.254515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 00:36:16.555 [2024-11-18 20:37:28.254751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.254818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 00:36:16.555 [2024-11-18 20:37:28.255110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.255177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 00:36:16.555 [2024-11-18 20:37:28.255477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.255544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 
00:36:16.555 [2024-11-18 20:37:28.255868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.255936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 00:36:16.555 [2024-11-18 20:37:28.256233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.256301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 00:36:16.555 [2024-11-18 20:37:28.256502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.256572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 00:36:16.555 [2024-11-18 20:37:28.256854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.256922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 00:36:16.555 [2024-11-18 20:37:28.257169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.257237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 
00:36:16.555 [2024-11-18 20:37:28.257453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.257521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 00:36:16.555 [2024-11-18 20:37:28.257809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.257878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 00:36:16.555 [2024-11-18 20:37:28.258172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.258239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 00:36:16.555 [2024-11-18 20:37:28.258486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.258555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 00:36:16.555 [2024-11-18 20:37:28.258867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.258936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 
00:36:16.555 [2024-11-18 20:37:28.259234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.259300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 00:36:16.555 [2024-11-18 20:37:28.259504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.259574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 00:36:16.555 [2024-11-18 20:37:28.259846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.259914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 00:36:16.555 [2024-11-18 20:37:28.260174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.260240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 00:36:16.555 [2024-11-18 20:37:28.260534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.555 [2024-11-18 20:37:28.260601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.555 qpair failed and we were unable to recover it. 
00:36:16.556 [2024-11-18 20:37:28.260904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.556 [2024-11-18 20:37:28.260972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.556 qpair failed and we were unable to recover it. 00:36:16.556 [2024-11-18 20:37:28.261258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.556 [2024-11-18 20:37:28.261324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.556 qpair failed and we were unable to recover it. 00:36:16.556 [2024-11-18 20:37:28.261625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.556 [2024-11-18 20:37:28.261707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.556 qpair failed and we were unable to recover it. 00:36:16.556 [2024-11-18 20:37:28.262003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.556 [2024-11-18 20:37:28.262072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.556 qpair failed and we were unable to recover it. 00:36:16.556 [2024-11-18 20:37:28.262324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.556 [2024-11-18 20:37:28.262400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.556 qpair failed and we were unable to recover it. 
00:36:16.556 [2024-11-18 20:37:28.262700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.556 [2024-11-18 20:37:28.262769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.556 qpair failed and we were unable to recover it. 00:36:16.556 [2024-11-18 20:37:28.262981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.556 [2024-11-18 20:37:28.263051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.556 qpair failed and we were unable to recover it. 00:36:16.556 [2024-11-18 20:37:28.263310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.556 [2024-11-18 20:37:28.263378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.556 qpair failed and we were unable to recover it. 00:36:16.556 [2024-11-18 20:37:28.263684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.556 [2024-11-18 20:37:28.263753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.556 qpair failed and we were unable to recover it. 00:36:16.556 [2024-11-18 20:37:28.263995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.556 [2024-11-18 20:37:28.264064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.556 qpair failed and we were unable to recover it. 
00:36:16.556 [2024-11-18 20:37:28.264324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.556 [2024-11-18 20:37:28.264391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.556 qpair failed and we were unable to recover it. 00:36:16.556 [2024-11-18 20:37:28.264691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.556 [2024-11-18 20:37:28.264760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.556 qpair failed and we were unable to recover it. 00:36:16.556 [2024-11-18 20:37:28.265053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.556 [2024-11-18 20:37:28.265121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.556 qpair failed and we were unable to recover it. 00:36:16.556 [2024-11-18 20:37:28.265377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.556 [2024-11-18 20:37:28.265444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.556 qpair failed and we were unable to recover it. 00:36:16.556 [2024-11-18 20:37:28.265742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.556 [2024-11-18 20:37:28.265811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.556 qpair failed and we were unable to recover it. 
00:36:16.556 [2024-11-18 20:37:28.266063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.556 [2024-11-18 20:37:28.266130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.556 qpair failed and we were unable to recover it. 00:36:16.556 [2024-11-18 20:37:28.266383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.556 [2024-11-18 20:37:28.266450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.556 qpair failed and we were unable to recover it. 00:36:16.556 [2024-11-18 20:37:28.266721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.556 [2024-11-18 20:37:28.266790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.556 qpair failed and we were unable to recover it. 00:36:16.556 [2024-11-18 20:37:28.267052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.556 [2024-11-18 20:37:28.267120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.556 qpair failed and we were unable to recover it. 00:36:16.556 [2024-11-18 20:37:28.267416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.556 [2024-11-18 20:37:28.267482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.556 qpair failed and we were unable to recover it. 
00:36:16.556 [2024-11-18 20:37:28.267737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.556 [2024-11-18 20:37:28.267806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.556 qpair failed and we were unable to recover it.
00:36:16.556 [2024-11-18 20:37:28.268023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.556 [2024-11-18 20:37:28.268093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.556 qpair failed and we were unable to recover it.
00:36:16.556 [2024-11-18 20:37:28.268357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.556 [2024-11-18 20:37:28.268424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.556 qpair failed and we were unable to recover it.
00:36:16.556 [2024-11-18 20:37:28.268721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.556 [2024-11-18 20:37:28.268789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.556 qpair failed and we were unable to recover it.
00:36:16.556 [2024-11-18 20:37:28.269083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.556 [2024-11-18 20:37:28.269151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.556 qpair failed and we were unable to recover it.
00:36:16.556 [2024-11-18 20:37:28.269454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.556 [2024-11-18 20:37:28.269522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.556 qpair failed and we were unable to recover it.
00:36:16.556 [2024-11-18 20:37:28.269763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.556 [2024-11-18 20:37:28.269831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.556 qpair failed and we were unable to recover it.
00:36:16.556 [2024-11-18 20:37:28.270123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.556 [2024-11-18 20:37:28.270190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.556 qpair failed and we were unable to recover it.
00:36:16.556 [2024-11-18 20:37:28.270488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.556 [2024-11-18 20:37:28.270555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.556 qpair failed and we were unable to recover it.
00:36:16.556 [2024-11-18 20:37:28.270798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.556 [2024-11-18 20:37:28.270866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.556 qpair failed and we were unable to recover it.
00:36:16.556 [2024-11-18 20:37:28.271058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.556 [2024-11-18 20:37:28.271126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.556 qpair failed and we were unable to recover it.
00:36:16.556 [2024-11-18 20:37:28.271371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.556 [2024-11-18 20:37:28.271438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.556 qpair failed and we were unable to recover it.
00:36:16.556 [2024-11-18 20:37:28.271684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.556 [2024-11-18 20:37:28.271752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.556 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.272017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.272084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.272373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.272441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.272697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.272766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.273007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.273074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.273376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.273442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.273739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.273808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.273994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.274064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.274274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.274341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.274629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.274717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.274968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.275035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.275324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.275390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.275692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.275774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.276038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.276105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.276395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.276461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.276755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.276824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.277067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.277134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.277397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.277463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.277755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.277823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.278087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.278154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.278449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.278515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.278804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.278873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.279119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.279186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.279483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.279550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.279863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.279933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.280178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.280246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.280555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.280622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.280884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.280951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.281203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.281273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.281565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.281632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.281927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.281994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.282299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.282367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.282664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.282732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.282985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.283052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.283350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.283417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.283690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.283760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.283956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.284024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.284317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.284384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.557 [2024-11-18 20:37:28.284675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.557 [2024-11-18 20:37:28.284744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.557 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.284995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.285063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.285362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.285429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.285672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.285741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.285940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.286009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.286264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.286332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.286571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.286650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.286936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.287003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.287261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.287327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.287626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.287709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.288007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.288075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.288328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.288396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.288660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.288727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.289009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.289076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.289335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.289404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.289671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.289741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.289957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.290024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.290272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.290340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.290656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.290724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.290948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.291017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.291271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.291339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.291594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.291687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.291987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.292055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.292320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.292387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.292631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.292716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.293012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.293079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.293378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.293446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.293738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.293806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.294109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.294176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.294470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.294537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.294839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.294908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.295109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.295176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.295412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.295478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.295747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.295815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.296108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.296175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.296435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.296502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.296743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.296811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.297076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.297144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.558 qpair failed and we were unable to recover it.
00:36:16.558 [2024-11-18 20:37:28.297384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.558 [2024-11-18 20:37:28.297451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.559 qpair failed and we were unable to recover it.
00:36:16.559 [2024-11-18 20:37:28.297702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.559 [2024-11-18 20:37:28.297770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.559 qpair failed and we were unable to recover it.
00:36:16.559 [2024-11-18 20:37:28.297967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.559 [2024-11-18 20:37:28.298034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.559 qpair failed and we were unable to recover it.
00:36:16.559 [2024-11-18 20:37:28.298273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.559 [2024-11-18 20:37:28.298352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.559 qpair failed and we were unable to recover it.
00:36:16.559 [2024-11-18 20:37:28.298605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.559 [2024-11-18 20:37:28.298690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.559 qpair failed and we were unable to recover it.
00:36:16.559 [2024-11-18 20:37:28.298943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.559 [2024-11-18 20:37:28.299010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.559 qpair failed and we were unable to recover it.
00:36:16.559 [2024-11-18 20:37:28.299295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.559 [2024-11-18 20:37:28.299362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.559 qpair failed and we were unable to recover it.
00:36:16.559 [2024-11-18 20:37:28.299624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.559 [2024-11-18 20:37:28.299729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.559 qpair failed and we were unable to recover it.
00:36:16.559 [2024-11-18 20:37:28.300022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.559 [2024-11-18 20:37:28.300089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.559 qpair failed and we were unable to recover it.
00:36:16.559 [2024-11-18 20:37:28.300359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.559 [2024-11-18 20:37:28.300426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.559 qpair failed and we were unable to recover it.
00:36:16.559 [2024-11-18 20:37:28.300721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.559 [2024-11-18 20:37:28.300791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.559 qpair failed and we were unable to recover it.
00:36:16.559 [2024-11-18 20:37:28.300987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.559 [2024-11-18 20:37:28.301054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.559 qpair failed and we were unable to recover it.
00:36:16.559 [2024-11-18 20:37:28.301360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.559 [2024-11-18 20:37:28.301428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.559 qpair failed and we were unable to recover it.
00:36:16.559 [2024-11-18 20:37:28.301732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.559 [2024-11-18 20:37:28.301801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.559 qpair failed and we were unable to recover it.
00:36:16.559 [2024-11-18 20:37:28.302090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.559 [2024-11-18 20:37:28.302157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.559 qpair failed and we were unable to recover it.
00:36:16.559 [2024-11-18 20:37:28.302468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.559 [2024-11-18 20:37:28.302536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.559 qpair failed and we were unable to recover it.
00:36:16.559 [2024-11-18 20:37:28.302845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.559 [2024-11-18 20:37:28.302913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.559 qpair failed and we were unable to recover it.
00:36:16.559 [2024-11-18 20:37:28.303208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.559 [2024-11-18 20:37:28.303275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.559 qpair failed and we were unable to recover it.
00:36:16.559 [2024-11-18 20:37:28.303515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.559 [2024-11-18 20:37:28.303583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.559 qpair failed and we were unable to recover it.
00:36:16.559 [2024-11-18 20:37:28.303936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.559 [2024-11-18 20:37:28.304005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.559 qpair failed and we were unable to recover it.
00:36:16.559 [2024-11-18 20:37:28.304260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.559 [2024-11-18 20:37:28.304329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.559 qpair failed and we were unable to recover it.
00:36:16.559 [2024-11-18 20:37:28.304594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.559 [2024-11-18 20:37:28.304676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.559 qpair failed and we were unable to recover it.
00:36:16.559 [2024-11-18 20:37:28.304969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.559 [2024-11-18 20:37:28.305036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.559 qpair failed and we were unable to recover it.
00:36:16.559 [2024-11-18 20:37:28.305338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.559 [2024-11-18 20:37:28.305404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.559 qpair failed and we were unable to recover it.
00:36:16.559 [2024-11-18 20:37:28.305702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.559 [2024-11-18 20:37:28.305771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.559 qpair failed and we were unable to recover it.
00:36:16.559 [2024-11-18 20:37:28.306057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.559 [2024-11-18 20:37:28.306123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.559 qpair failed and we were unable to recover it.
00:36:16.559 [2024-11-18 20:37:28.306380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.559 [2024-11-18 20:37:28.306446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.559 qpair failed and we were unable to recover it.
00:36:16.559 [2024-11-18 20:37:28.306741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.559 [2024-11-18 20:37:28.306809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.559 qpair failed and we were unable to recover it. 00:36:16.559 [2024-11-18 20:37:28.307067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.559 [2024-11-18 20:37:28.307133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.559 qpair failed and we were unable to recover it. 00:36:16.559 [2024-11-18 20:37:28.307385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.559 [2024-11-18 20:37:28.307452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.559 qpair failed and we were unable to recover it. 00:36:16.559 [2024-11-18 20:37:28.307723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.559 [2024-11-18 20:37:28.307791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.559 qpair failed and we were unable to recover it. 00:36:16.559 [2024-11-18 20:37:28.308032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.559 [2024-11-18 20:37:28.308098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.559 qpair failed and we were unable to recover it. 
00:36:16.559 [2024-11-18 20:37:28.308316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.559 [2024-11-18 20:37:28.308383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.559 qpair failed and we were unable to recover it. 00:36:16.559 [2024-11-18 20:37:28.308626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.559 [2024-11-18 20:37:28.308717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.559 qpair failed and we were unable to recover it. 00:36:16.559 [2024-11-18 20:37:28.308956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.559 [2024-11-18 20:37:28.309023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.559 qpair failed and we were unable to recover it. 00:36:16.559 [2024-11-18 20:37:28.309315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.559 [2024-11-18 20:37:28.309382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.559 qpair failed and we were unable to recover it. 00:36:16.559 [2024-11-18 20:37:28.309677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.559 [2024-11-18 20:37:28.309746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.559 qpair failed and we were unable to recover it. 
00:36:16.559 [2024-11-18 20:37:28.310003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.559 [2024-11-18 20:37:28.310070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.559 qpair failed and we were unable to recover it. 00:36:16.559 [2024-11-18 20:37:28.310283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.310349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.310602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.310682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.310931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.310997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.311287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.311354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 
00:36:16.560 [2024-11-18 20:37:28.311599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.311694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.311941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.312018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.312275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.312343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.312667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.312735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.313006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.313073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 
00:36:16.560 [2024-11-18 20:37:28.313367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.313435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.313625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.313722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.314025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.314092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.314351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.314418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.314599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.314681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 
00:36:16.560 [2024-11-18 20:37:28.314936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.315004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.315264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.315332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.315618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.315702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.315990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.316057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.316305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.316373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 
00:36:16.560 [2024-11-18 20:37:28.316691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.316760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.317057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.317123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.317425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.317495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.317796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.317865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.318157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.318224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 
00:36:16.560 [2024-11-18 20:37:28.318517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.318584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.318856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.318923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.319215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.319282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.319574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.319656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.319906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.319974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 
00:36:16.560 [2024-11-18 20:37:28.320227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.320294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.320551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.320619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.320925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.320992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.321303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.321370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.321622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.321717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 
00:36:16.560 [2024-11-18 20:37:28.321960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.322027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.322252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.322319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.322613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.322838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.323132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.560 [2024-11-18 20:37:28.323199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.560 qpair failed and we were unable to recover it. 00:36:16.560 [2024-11-18 20:37:28.323461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.561 [2024-11-18 20:37:28.323528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.561 qpair failed and we were unable to recover it. 
00:36:16.561 [2024-11-18 20:37:28.323839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.561 [2024-11-18 20:37:28.323908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.561 qpair failed and we were unable to recover it. 00:36:16.561 [2024-11-18 20:37:28.324116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.561 [2024-11-18 20:37:28.324183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.561 qpair failed and we were unable to recover it. 00:36:16.561 [2024-11-18 20:37:28.324473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.561 [2024-11-18 20:37:28.324540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.561 qpair failed and we were unable to recover it. 00:36:16.561 [2024-11-18 20:37:28.324797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.561 [2024-11-18 20:37:28.324866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.561 qpair failed and we were unable to recover it. 00:36:16.561 [2024-11-18 20:37:28.325055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.561 [2024-11-18 20:37:28.325123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.561 qpair failed and we were unable to recover it. 
00:36:16.561 [2024-11-18 20:37:28.325405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.561 [2024-11-18 20:37:28.325472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.561 qpair failed and we were unable to recover it. 00:36:16.561 [2024-11-18 20:37:28.325719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.561 [2024-11-18 20:37:28.325798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.561 qpair failed and we were unable to recover it. 00:36:16.561 [2024-11-18 20:37:28.326058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.561 [2024-11-18 20:37:28.326127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.561 qpair failed and we were unable to recover it. 00:36:16.561 [2024-11-18 20:37:28.326422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.561 [2024-11-18 20:37:28.326489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.561 qpair failed and we were unable to recover it. 00:36:16.561 [2024-11-18 20:37:28.326788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.561 [2024-11-18 20:37:28.326856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.561 qpair failed and we were unable to recover it. 
00:36:16.561 [2024-11-18 20:37:28.327109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.561 [2024-11-18 20:37:28.327178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.561 qpair failed and we were unable to recover it. 00:36:16.561 [2024-11-18 20:37:28.327479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.561 [2024-11-18 20:37:28.327546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.561 qpair failed and we were unable to recover it. 00:36:16.561 [2024-11-18 20:37:28.327848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.561 [2024-11-18 20:37:28.327917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.561 qpair failed and we were unable to recover it. 00:36:16.561 [2024-11-18 20:37:28.328180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.561 [2024-11-18 20:37:28.328248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.561 qpair failed and we were unable to recover it. 00:36:16.561 [2024-11-18 20:37:28.328504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.561 [2024-11-18 20:37:28.328573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.561 qpair failed and we were unable to recover it. 
00:36:16.561 [2024-11-18 20:37:28.328841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.561 [2024-11-18 20:37:28.328910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.561 qpair failed and we were unable to recover it. 00:36:16.561 [2024-11-18 20:37:28.329209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.561 [2024-11-18 20:37:28.329277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.561 qpair failed and we were unable to recover it. 00:36:16.561 [2024-11-18 20:37:28.329519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.561 [2024-11-18 20:37:28.329588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.561 qpair failed and we were unable to recover it. 00:36:16.561 [2024-11-18 20:37:28.329878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.561 [2024-11-18 20:37:28.329946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.561 qpair failed and we were unable to recover it. 00:36:16.561 [2024-11-18 20:37:28.330199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.561 [2024-11-18 20:37:28.330267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.561 qpair failed and we were unable to recover it. 
00:36:16.561 [2024-11-18 20:37:28.330570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.561 [2024-11-18 20:37:28.330655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.561 qpair failed and we were unable to recover it. 00:36:16.561 [2024-11-18 20:37:28.330952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.561 [2024-11-18 20:37:28.331020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.561 qpair failed and we were unable to recover it. 00:36:16.561 [2024-11-18 20:37:28.331273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.561 [2024-11-18 20:37:28.331341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.561 qpair failed and we were unable to recover it. 00:36:16.561 [2024-11-18 20:37:28.331583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.561 [2024-11-18 20:37:28.331665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.561 qpair failed and we were unable to recover it. 00:36:16.561 [2024-11-18 20:37:28.331931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.561 [2024-11-18 20:37:28.331998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.561 qpair failed and we were unable to recover it. 
00:36:16.561 [2024-11-18 20:37:28.332262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:16.561 [2024-11-18 20:37:28.332330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 
00:36:16.561 qpair failed and we were unable to recover it. 
00:36:16.561-00:36:16.564 [2024-11-18 20:37:28.332628 - 2024-11-18 20:37:28.370493] last three-line error sequence repeated 114 more times (connect() to addr=10.0.0.2, port=4420 refused with errno = 111; tqpair=0x7fe698000b90 was never recovered). 
00:36:16.564 [2024-11-18 20:37:28.370725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.564 [2024-11-18 20:37:28.370793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.564 qpair failed and we were unable to recover it. 00:36:16.564 [2024-11-18 20:37:28.371084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.564 [2024-11-18 20:37:28.371152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.564 qpair failed and we were unable to recover it. 00:36:16.564 [2024-11-18 20:37:28.371401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.564 [2024-11-18 20:37:28.371468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.564 qpair failed and we were unable to recover it. 00:36:16.564 [2024-11-18 20:37:28.371715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.564 [2024-11-18 20:37:28.371783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.564 qpair failed and we were unable to recover it. 00:36:16.564 [2024-11-18 20:37:28.372025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.564 [2024-11-18 20:37:28.372093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.564 qpair failed and we were unable to recover it. 
00:36:16.564 [2024-11-18 20:37:28.372317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.372385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 00:36:16.565 [2024-11-18 20:37:28.372625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.372708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 00:36:16.565 [2024-11-18 20:37:28.372975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.373042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 00:36:16.565 [2024-11-18 20:37:28.373289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.373356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 00:36:16.565 [2024-11-18 20:37:28.373571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.373670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 
00:36:16.565 [2024-11-18 20:37:28.373866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.373933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 00:36:16.565 [2024-11-18 20:37:28.374225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.374291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 00:36:16.565 [2024-11-18 20:37:28.374543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.374610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 00:36:16.565 [2024-11-18 20:37:28.374941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.375009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 00:36:16.565 [2024-11-18 20:37:28.375263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.375330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 
00:36:16.565 [2024-11-18 20:37:28.375581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.375669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 00:36:16.565 [2024-11-18 20:37:28.375918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.375985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 00:36:16.565 [2024-11-18 20:37:28.376275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.376343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 00:36:16.565 [2024-11-18 20:37:28.376546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.376613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 00:36:16.565 [2024-11-18 20:37:28.376895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.376961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 
00:36:16.565 [2024-11-18 20:37:28.377219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.377286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 00:36:16.565 [2024-11-18 20:37:28.377580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.377667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 00:36:16.565 [2024-11-18 20:37:28.377962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.378029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 00:36:16.565 [2024-11-18 20:37:28.378278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.378345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 00:36:16.565 [2024-11-18 20:37:28.378657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.378726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 
00:36:16.565 [2024-11-18 20:37:28.378977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.379043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 00:36:16.565 [2024-11-18 20:37:28.379268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.379345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 00:36:16.565 [2024-11-18 20:37:28.379655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.379724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 00:36:16.565 [2024-11-18 20:37:28.380014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.380081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 00:36:16.565 [2024-11-18 20:37:28.380330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.380397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 
00:36:16.565 [2024-11-18 20:37:28.380664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.380733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 00:36:16.565 [2024-11-18 20:37:28.380945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.381012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 00:36:16.565 [2024-11-18 20:37:28.381322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.381388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 00:36:16.565 [2024-11-18 20:37:28.381702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.381770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 00:36:16.565 [2024-11-18 20:37:28.381971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.382037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 
00:36:16.565 [2024-11-18 20:37:28.382324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.382391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 00:36:16.565 [2024-11-18 20:37:28.382692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.565 [2024-11-18 20:37:28.382762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.565 qpair failed and we were unable to recover it. 00:36:16.565 [2024-11-18 20:37:28.383053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.383120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 00:36:16.566 [2024-11-18 20:37:28.383383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.383449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 00:36:16.566 [2024-11-18 20:37:28.383745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.383813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 
00:36:16.566 [2024-11-18 20:37:28.384040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.384107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 00:36:16.566 [2024-11-18 20:37:28.384403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.384469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 00:36:16.566 [2024-11-18 20:37:28.384731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.384799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 00:36:16.566 [2024-11-18 20:37:28.385050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.385116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 00:36:16.566 [2024-11-18 20:37:28.385369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.385436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 
00:36:16.566 [2024-11-18 20:37:28.385731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.385800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 00:36:16.566 [2024-11-18 20:37:28.386044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.386110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 00:36:16.566 [2024-11-18 20:37:28.386355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.386422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 00:36:16.566 [2024-11-18 20:37:28.386719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.386788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 00:36:16.566 [2024-11-18 20:37:28.387044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.387111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 
00:36:16.566 [2024-11-18 20:37:28.387315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.387383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 00:36:16.566 [2024-11-18 20:37:28.387682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.387750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 00:36:16.566 [2024-11-18 20:37:28.387983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.388051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 00:36:16.566 [2024-11-18 20:37:28.388352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.388420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 00:36:16.566 [2024-11-18 20:37:28.388710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.388778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 
00:36:16.566 [2024-11-18 20:37:28.388969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.389036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 00:36:16.566 [2024-11-18 20:37:28.389329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.389395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 00:36:16.566 [2024-11-18 20:37:28.389684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.389751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 00:36:16.566 [2024-11-18 20:37:28.390041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.390108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 00:36:16.566 [2024-11-18 20:37:28.390411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.390478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 
00:36:16.566 [2024-11-18 20:37:28.390770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.390837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 00:36:16.566 [2024-11-18 20:37:28.391096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.391164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 00:36:16.566 [2024-11-18 20:37:28.391415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.391484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 00:36:16.566 [2024-11-18 20:37:28.391783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.391851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 00:36:16.566 [2024-11-18 20:37:28.392097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.392165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 
00:36:16.566 [2024-11-18 20:37:28.392379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.392447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 00:36:16.566 [2024-11-18 20:37:28.392733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.392812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 00:36:16.566 [2024-11-18 20:37:28.393105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.393173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 00:36:16.566 [2024-11-18 20:37:28.393464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.393530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 00:36:16.566 [2024-11-18 20:37:28.393852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.393920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 
00:36:16.566 [2024-11-18 20:37:28.394166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.394235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 00:36:16.566 [2024-11-18 20:37:28.394537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.394603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 00:36:16.566 [2024-11-18 20:37:28.394910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.394977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 00:36:16.566 [2024-11-18 20:37:28.395282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.395349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.566 qpair failed and we were unable to recover it. 00:36:16.566 [2024-11-18 20:37:28.395601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.566 [2024-11-18 20:37:28.395688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.567 qpair failed and we were unable to recover it. 
00:36:16.567 [2024-11-18 20:37:28.395924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.567 [2024-11-18 20:37:28.395991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.567 qpair failed and we were unable to recover it. 
[... identical three-message sequence repeated for every reconnect attempt: connect() fails with errno = 111 against 10.0.0.2:4420, first for tqpair=0x7fe698000b90 (20:37:28.396 through 20:37:28.428), then for tqpair=0x1671b40 (20:37:28.429 through 20:37:28.434); every attempt fails and no qpair is recovered ...]
00:36:16.570 [2024-11-18 20:37:28.434916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.434981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 00:36:16.570 [2024-11-18 20:37:28.435273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.435340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 00:36:16.570 [2024-11-18 20:37:28.435655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.435721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 00:36:16.570 [2024-11-18 20:37:28.435987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.436053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 00:36:16.570 [2024-11-18 20:37:28.436322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.436389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 
00:36:16.570 [2024-11-18 20:37:28.436619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.436712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 00:36:16.570 [2024-11-18 20:37:28.436958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.437026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 00:36:16.570 [2024-11-18 20:37:28.437308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.437374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 00:36:16.570 [2024-11-18 20:37:28.437633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.437735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 00:36:16.570 [2024-11-18 20:37:28.438030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.438095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 
00:36:16.570 [2024-11-18 20:37:28.438304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.438398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 00:36:16.570 [2024-11-18 20:37:28.438653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.438730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 00:36:16.570 [2024-11-18 20:37:28.439032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.439100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 00:36:16.570 [2024-11-18 20:37:28.439397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.439463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 00:36:16.570 [2024-11-18 20:37:28.439758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.439830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 
00:36:16.570 [2024-11-18 20:37:28.440125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.440190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 00:36:16.570 [2024-11-18 20:37:28.440512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.440589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 00:36:16.570 [2024-11-18 20:37:28.440823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.440888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 00:36:16.570 [2024-11-18 20:37:28.441098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.441164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 00:36:16.570 [2024-11-18 20:37:28.441391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.441457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 
00:36:16.570 [2024-11-18 20:37:28.441713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.441781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 00:36:16.570 [2024-11-18 20:37:28.442089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.442154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 00:36:16.570 [2024-11-18 20:37:28.442454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.442519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 00:36:16.570 [2024-11-18 20:37:28.442784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.442824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 00:36:16.570 [2024-11-18 20:37:28.442962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.442998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 
00:36:16.570 [2024-11-18 20:37:28.443166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.443201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 00:36:16.570 [2024-11-18 20:37:28.443340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.443375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 00:36:16.570 [2024-11-18 20:37:28.443526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.443561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 00:36:16.570 [2024-11-18 20:37:28.443699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.443748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 00:36:16.570 [2024-11-18 20:37:28.443896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.443969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 
00:36:16.570 [2024-11-18 20:37:28.444288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.444353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.570 qpair failed and we were unable to recover it. 00:36:16.570 [2024-11-18 20:37:28.444595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.570 [2024-11-18 20:37:28.444674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.444998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.445066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.445363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.445430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.445705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.445772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 
00:36:16.571 [2024-11-18 20:37:28.446052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.446118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.446400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.446466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.446712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.446791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.447068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.447135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.447372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.447438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 
00:36:16.571 [2024-11-18 20:37:28.447674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.447753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.448019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.448087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.448341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.448407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.448674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.448757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.449025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.449091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 
00:36:16.571 [2024-11-18 20:37:28.449360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.449427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.449716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.449791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.450058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.450124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.450317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.450382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.450624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.450713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 
00:36:16.571 [2024-11-18 20:37:28.450969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.451035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.451344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.451416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.451688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.451755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.452048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.452114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.452423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.452502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 
00:36:16.571 [2024-11-18 20:37:28.452719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.452785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.453041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.453107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.453395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.453461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.453669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.453735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.453994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.454062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 
00:36:16.571 [2024-11-18 20:37:28.454252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.454314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.454577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.454675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.454861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.454897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.455017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.455054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.455190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.455230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 
00:36:16.571 [2024-11-18 20:37:28.455376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.455411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.455547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.455583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.455735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.455770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.455887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.455932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 00:36:16.571 [2024-11-18 20:37:28.456045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.571 [2024-11-18 20:37:28.456088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.571 qpair failed and we were unable to recover it. 
00:36:16.572 [2024-11-18 20:37:28.456287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.572 [2024-11-18 20:37:28.456327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.572 qpair failed and we were unable to recover it. 00:36:16.572 [2024-11-18 20:37:28.456453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.572 [2024-11-18 20:37:28.456493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.572 qpair failed and we were unable to recover it. 00:36:16.572 [2024-11-18 20:37:28.456646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.572 [2024-11-18 20:37:28.456681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.572 qpair failed and we were unable to recover it. 00:36:16.572 [2024-11-18 20:37:28.456795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.572 [2024-11-18 20:37:28.456838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.572 qpair failed and we were unable to recover it. 00:36:16.572 [2024-11-18 20:37:28.456983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.572 [2024-11-18 20:37:28.457024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.572 qpair failed and we were unable to recover it. 
00:36:16.572 [2024-11-18 20:37:28.457189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.572 [2024-11-18 20:37:28.457246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.572 qpair failed and we were unable to recover it.
[the three lines above repeat ~115 times between 20:37:28.457189 and 20:37:28.477874, identical except for timestamps: connect() to 10.0.0.2:4420 keeps failing with errno 111 and the qpair cannot recover]
00:36:16.575 [2024-11-18 20:37:28.477831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.575 [2024-11-18 20:37:28.477874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.575 qpair failed and we were unable to recover it.
00:36:16.575 [2024-11-18 20:37:28.478071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.575 [2024-11-18 20:37:28.478121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.575 qpair failed and we were unable to recover it. 00:36:16.575 [2024-11-18 20:37:28.478281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.862 [2024-11-18 20:37:28.478351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.862 qpair failed and we were unable to recover it. 00:36:16.862 [2024-11-18 20:37:28.478533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.862 [2024-11-18 20:37:28.478570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.862 qpair failed and we were unable to recover it. 00:36:16.862 [2024-11-18 20:37:28.478754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.862 [2024-11-18 20:37:28.478791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.862 qpair failed and we were unable to recover it. 00:36:16.862 [2024-11-18 20:37:28.478922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.862 [2024-11-18 20:37:28.478967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.862 qpair failed and we were unable to recover it. 
00:36:16.862 [2024-11-18 20:37:28.479106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.862 [2024-11-18 20:37:28.479145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.862 qpair failed and we were unable to recover it. 00:36:16.862 [2024-11-18 20:37:28.479296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.862 [2024-11-18 20:37:28.479333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.862 qpair failed and we were unable to recover it. 00:36:16.862 [2024-11-18 20:37:28.479485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.862 [2024-11-18 20:37:28.479539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.862 qpair failed and we were unable to recover it. 00:36:16.862 [2024-11-18 20:37:28.479702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.862 [2024-11-18 20:37:28.479751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.862 qpair failed and we were unable to recover it. 00:36:16.862 [2024-11-18 20:37:28.479899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.862 [2024-11-18 20:37:28.479950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.862 qpair failed and we were unable to recover it. 
00:36:16.862 [2024-11-18 20:37:28.480099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.862 [2024-11-18 20:37:28.480138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.862 qpair failed and we were unable to recover it. 00:36:16.862 [2024-11-18 20:37:28.480286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.862 [2024-11-18 20:37:28.480324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.862 qpair failed and we were unable to recover it. 00:36:16.862 [2024-11-18 20:37:28.480474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.862 [2024-11-18 20:37:28.480510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.862 qpair failed and we were unable to recover it. 00:36:16.862 [2024-11-18 20:37:28.480669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.862 [2024-11-18 20:37:28.480729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.862 qpair failed and we were unable to recover it. 00:36:16.862 [2024-11-18 20:37:28.480869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.862 [2024-11-18 20:37:28.480917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.862 qpair failed and we were unable to recover it. 
00:36:16.862 [2024-11-18 20:37:28.481110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.481149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 00:36:16.863 [2024-11-18 20:37:28.481265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.481300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 00:36:16.863 [2024-11-18 20:37:28.481456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.481492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 00:36:16.863 [2024-11-18 20:37:28.481627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.481700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 00:36:16.863 [2024-11-18 20:37:28.481841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.481888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 
00:36:16.863 [2024-11-18 20:37:28.482040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.482087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 00:36:16.863 [2024-11-18 20:37:28.482242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.482279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 00:36:16.863 [2024-11-18 20:37:28.482422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.482458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 00:36:16.863 [2024-11-18 20:37:28.482649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.482707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 00:36:16.863 [2024-11-18 20:37:28.482871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.482920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 
00:36:16.863 [2024-11-18 20:37:28.483128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.483167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 00:36:16.863 [2024-11-18 20:37:28.483309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.483347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 00:36:16.863 [2024-11-18 20:37:28.483496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.483532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 00:36:16.863 [2024-11-18 20:37:28.483651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.483695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 00:36:16.863 [2024-11-18 20:37:28.483813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.483849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 
00:36:16.863 [2024-11-18 20:37:28.484015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.484061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 00:36:16.863 [2024-11-18 20:37:28.484204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.484253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 00:36:16.863 [2024-11-18 20:37:28.484483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.484540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 00:36:16.863 [2024-11-18 20:37:28.484721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.484762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 00:36:16.863 [2024-11-18 20:37:28.484875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.484909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 
00:36:16.863 [2024-11-18 20:37:28.485045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.485081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 00:36:16.863 [2024-11-18 20:37:28.485239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.485287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 00:36:16.863 [2024-11-18 20:37:28.485472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.485562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 00:36:16.863 [2024-11-18 20:37:28.485736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.485777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 00:36:16.863 [2024-11-18 20:37:28.485913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.485951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 
00:36:16.863 [2024-11-18 20:37:28.486097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.486134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 00:36:16.863 [2024-11-18 20:37:28.486271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.486309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 00:36:16.863 [2024-11-18 20:37:28.486454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.486490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 00:36:16.863 [2024-11-18 20:37:28.486596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.486631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 00:36:16.863 [2024-11-18 20:37:28.486755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.486791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 
00:36:16.863 [2024-11-18 20:37:28.486900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.486935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 00:36:16.863 [2024-11-18 20:37:28.487079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.487116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 00:36:16.863 [2024-11-18 20:37:28.487262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.487299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 00:36:16.863 [2024-11-18 20:37:28.487444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.487480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 00:36:16.863 [2024-11-18 20:37:28.487588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.487625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 
00:36:16.863 [2024-11-18 20:37:28.487759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.487796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.863 qpair failed and we were unable to recover it. 00:36:16.863 [2024-11-18 20:37:28.487932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.863 [2024-11-18 20:37:28.487969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.864 qpair failed and we were unable to recover it. 00:36:16.864 [2024-11-18 20:37:28.488113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.864 [2024-11-18 20:37:28.488150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.864 qpair failed and we were unable to recover it. 00:36:16.864 [2024-11-18 20:37:28.488326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.864 [2024-11-18 20:37:28.488362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.864 qpair failed and we were unable to recover it. 00:36:16.864 [2024-11-18 20:37:28.488507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.864 [2024-11-18 20:37:28.488561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.864 qpair failed and we were unable to recover it. 
00:36:16.864 [2024-11-18 20:37:28.488716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.864 [2024-11-18 20:37:28.488752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.864 qpair failed and we were unable to recover it. 00:36:16.864 [2024-11-18 20:37:28.488896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.864 [2024-11-18 20:37:28.488941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.864 qpair failed and we were unable to recover it. 00:36:16.864 [2024-11-18 20:37:28.489086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.864 [2024-11-18 20:37:28.489122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.864 qpair failed and we were unable to recover it. 00:36:16.864 [2024-11-18 20:37:28.489292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.864 [2024-11-18 20:37:28.489328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.864 qpair failed and we were unable to recover it. 00:36:16.864 [2024-11-18 20:37:28.489461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.864 [2024-11-18 20:37:28.489497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.864 qpair failed and we were unable to recover it. 
00:36:16.864 [2024-11-18 20:37:28.489630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.864 [2024-11-18 20:37:28.489677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.864 qpair failed and we were unable to recover it. 00:36:16.864 [2024-11-18 20:37:28.489825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.864 [2024-11-18 20:37:28.489861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.864 qpair failed and we were unable to recover it. 00:36:16.864 [2024-11-18 20:37:28.489995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.864 [2024-11-18 20:37:28.490030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.864 qpair failed and we were unable to recover it. 00:36:16.864 [2024-11-18 20:37:28.490143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.864 [2024-11-18 20:37:28.490177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.864 qpair failed and we were unable to recover it. 00:36:16.864 [2024-11-18 20:37:28.490359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.864 [2024-11-18 20:37:28.490426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.864 qpair failed and we were unable to recover it. 
00:36:16.864 [2024-11-18 20:37:28.490644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.864 [2024-11-18 20:37:28.490692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.864 qpair failed and we were unable to recover it. 00:36:16.864 [2024-11-18 20:37:28.490834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.864 [2024-11-18 20:37:28.490869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.864 qpair failed and we were unable to recover it. 00:36:16.864 [2024-11-18 20:37:28.491038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.864 [2024-11-18 20:37:28.491074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.864 qpair failed and we were unable to recover it. 00:36:16.864 [2024-11-18 20:37:28.491212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.864 [2024-11-18 20:37:28.491247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.864 qpair failed and we were unable to recover it. 00:36:16.864 [2024-11-18 20:37:28.491391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.864 [2024-11-18 20:37:28.491428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.864 qpair failed and we were unable to recover it. 
00:36:16.864 [2024-11-18 20:37:28.491689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.864 [2024-11-18 20:37:28.491735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.864 qpair failed and we were unable to recover it. 00:36:16.864 [2024-11-18 20:37:28.491868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.864 [2024-11-18 20:37:28.491906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.864 qpair failed and we were unable to recover it. 00:36:16.864 [2024-11-18 20:37:28.492039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.864 [2024-11-18 20:37:28.492074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.864 qpair failed and we were unable to recover it. 00:36:16.864 [2024-11-18 20:37:28.492217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.864 [2024-11-18 20:37:28.492252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.864 qpair failed and we were unable to recover it. 00:36:16.864 [2024-11-18 20:37:28.492399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.864 [2024-11-18 20:37:28.492439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.864 qpair failed and we were unable to recover it. 
00:36:16.864 [2024-11-18 20:37:28.492590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.864 [2024-11-18 20:37:28.492624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.864 qpair failed and we were unable to recover it.
[... the three lines above repeated 10 more times for tqpair=0x1671b40, timestamps 20:37:28.492757 through 20:37:28.494609, identical except for timestamps ...]
00:36:16.864 [2024-11-18 20:37:28.494784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.864 [2024-11-18 20:37:28.494834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.864 qpair failed and we were unable to recover it.
[... the three lines above repeated 103 more times for tqpair=0x7fe6a0000b90, timestamps 20:37:28.494991 through 20:37:28.513452, identical except for timestamps ...]
00:36:16.867 [2024-11-18 20:37:28.513657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.867 [2024-11-18 20:37:28.513710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.867 qpair failed and we were unable to recover it. 00:36:16.867 [2024-11-18 20:37:28.513882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.867 [2024-11-18 20:37:28.513917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.867 qpair failed and we were unable to recover it. 00:36:16.867 [2024-11-18 20:37:28.514075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.867 [2024-11-18 20:37:28.514111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.867 qpair failed and we were unable to recover it. 00:36:16.867 [2024-11-18 20:37:28.514261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.867 [2024-11-18 20:37:28.514296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.867 qpair failed and we were unable to recover it. 00:36:16.867 [2024-11-18 20:37:28.514429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.867 [2024-11-18 20:37:28.514470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 
00:36:16.868 [2024-11-18 20:37:28.514661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.514706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 00:36:16.868 [2024-11-18 20:37:28.514906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.514941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 00:36:16.868 [2024-11-18 20:37:28.515078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.515112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 00:36:16.868 [2024-11-18 20:37:28.515238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.515272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 00:36:16.868 [2024-11-18 20:37:28.515412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.515447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 
00:36:16.868 [2024-11-18 20:37:28.515594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.515629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 00:36:16.868 [2024-11-18 20:37:28.515820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.515854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 00:36:16.868 [2024-11-18 20:37:28.515996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.516031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 00:36:16.868 [2024-11-18 20:37:28.516127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.516162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 00:36:16.868 [2024-11-18 20:37:28.516301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.516335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 
00:36:16.868 [2024-11-18 20:37:28.516523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.516558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 00:36:16.868 [2024-11-18 20:37:28.516733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.516769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 00:36:16.868 [2024-11-18 20:37:28.516929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.516993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 00:36:16.868 [2024-11-18 20:37:28.517294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.517329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 00:36:16.868 [2024-11-18 20:37:28.517499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.517532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 
00:36:16.868 [2024-11-18 20:37:28.517666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.517702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 00:36:16.868 [2024-11-18 20:37:28.517875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.517932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 00:36:16.868 [2024-11-18 20:37:28.518072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.518106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 00:36:16.868 [2024-11-18 20:37:28.518248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.518282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 00:36:16.868 [2024-11-18 20:37:28.518455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.518489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 
00:36:16.868 [2024-11-18 20:37:28.518625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.518671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 00:36:16.868 [2024-11-18 20:37:28.518820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.518854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 00:36:16.868 [2024-11-18 20:37:28.518971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.519007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 00:36:16.868 [2024-11-18 20:37:28.519147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.519182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 00:36:16.868 [2024-11-18 20:37:28.519395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.519462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 
00:36:16.868 [2024-11-18 20:37:28.519695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.519731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 00:36:16.868 [2024-11-18 20:37:28.519887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.519921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 00:36:16.868 [2024-11-18 20:37:28.520052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.520086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 00:36:16.868 [2024-11-18 20:37:28.520254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.520288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 00:36:16.868 [2024-11-18 20:37:28.520420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.520459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 
00:36:16.868 [2024-11-18 20:37:28.520627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.520669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 00:36:16.868 [2024-11-18 20:37:28.520838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.520894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 00:36:16.868 [2024-11-18 20:37:28.521156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.521190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 00:36:16.868 [2024-11-18 20:37:28.521329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.521363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 00:36:16.868 [2024-11-18 20:37:28.521534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.521568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 
00:36:16.868 [2024-11-18 20:37:28.521710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.521745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 00:36:16.868 [2024-11-18 20:37:28.521889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.868 [2024-11-18 20:37:28.521924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.868 qpair failed and we were unable to recover it. 00:36:16.868 [2024-11-18 20:37:28.522030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.522065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 00:36:16.869 [2024-11-18 20:37:28.522246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.522281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 00:36:16.869 [2024-11-18 20:37:28.522426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.522461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 
00:36:16.869 [2024-11-18 20:37:28.522643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.522679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 00:36:16.869 [2024-11-18 20:37:28.522848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.522883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 00:36:16.869 [2024-11-18 20:37:28.523066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.523101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 00:36:16.869 [2024-11-18 20:37:28.523247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.523281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 00:36:16.869 [2024-11-18 20:37:28.523429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.523463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 
00:36:16.869 [2024-11-18 20:37:28.523602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.523646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 00:36:16.869 [2024-11-18 20:37:28.523783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.523818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 00:36:16.869 [2024-11-18 20:37:28.523924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.523959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 00:36:16.869 [2024-11-18 20:37:28.524058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.524093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 00:36:16.869 [2024-11-18 20:37:28.524228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.524264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 
00:36:16.869 [2024-11-18 20:37:28.524381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.524416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 00:36:16.869 [2024-11-18 20:37:28.524557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.524592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 00:36:16.869 [2024-11-18 20:37:28.524716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.524751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 00:36:16.869 [2024-11-18 20:37:28.524884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.524919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 00:36:16.869 [2024-11-18 20:37:28.525061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.525095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 
00:36:16.869 [2024-11-18 20:37:28.525271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.525306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 00:36:16.869 [2024-11-18 20:37:28.525600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.525694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 00:36:16.869 [2024-11-18 20:37:28.525836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.525879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 00:36:16.869 [2024-11-18 20:37:28.526093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.526128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 00:36:16.869 [2024-11-18 20:37:28.526271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.526306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 
00:36:16.869 [2024-11-18 20:37:28.526480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.526516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 00:36:16.869 [2024-11-18 20:37:28.526691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.526727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 00:36:16.869 [2024-11-18 20:37:28.526868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.526902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 00:36:16.869 [2024-11-18 20:37:28.527040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.527074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 00:36:16.869 [2024-11-18 20:37:28.527334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.527369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 
00:36:16.869 [2024-11-18 20:37:28.527538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.527572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 00:36:16.869 [2024-11-18 20:37:28.527787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.527832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 00:36:16.869 [2024-11-18 20:37:28.528016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.528051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 00:36:16.869 [2024-11-18 20:37:28.528190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.528225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 00:36:16.869 [2024-11-18 20:37:28.528370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.528438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 
00:36:16.869 [2024-11-18 20:37:28.528731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.528766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 00:36:16.869 [2024-11-18 20:37:28.528876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.528910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 00:36:16.869 [2024-11-18 20:37:28.529054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.529121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 00:36:16.869 [2024-11-18 20:37:28.529317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.869 [2024-11-18 20:37:28.529383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.869 qpair failed and we were unable to recover it. 00:36:16.869 [2024-11-18 20:37:28.529643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.870 [2024-11-18 20:37:28.529678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.870 qpair failed and we were unable to recover it. 
00:36:16.873 [2024-11-18 20:37:28.555183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.555253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 00:36:16.873 [2024-11-18 20:37:28.555512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.555588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 00:36:16.873 [2024-11-18 20:37:28.555869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.555934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 00:36:16.873 [2024-11-18 20:37:28.556228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.556294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 00:36:16.873 [2024-11-18 20:37:28.556552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.556628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 
00:36:16.873 [2024-11-18 20:37:28.556937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.557002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 00:36:16.873 [2024-11-18 20:37:28.557303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.557337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 00:36:16.873 [2024-11-18 20:37:28.557503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.557562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 00:36:16.873 [2024-11-18 20:37:28.557838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.557905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 00:36:16.873 [2024-11-18 20:37:28.558195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.558259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 
00:36:16.873 [2024-11-18 20:37:28.558543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.558578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 00:36:16.873 [2024-11-18 20:37:28.558729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.558765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 00:36:16.873 [2024-11-18 20:37:28.559030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.559095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 00:36:16.873 [2024-11-18 20:37:28.559390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.559425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 00:36:16.873 [2024-11-18 20:37:28.559575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.559609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 
00:36:16.873 [2024-11-18 20:37:28.559782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.559848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 00:36:16.873 [2024-11-18 20:37:28.560103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.560169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 00:36:16.873 [2024-11-18 20:37:28.560416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.560483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 00:36:16.873 [2024-11-18 20:37:28.560755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.560790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 00:36:16.873 [2024-11-18 20:37:28.560893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.560928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 
00:36:16.873 [2024-11-18 20:37:28.561071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.561105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 00:36:16.873 [2024-11-18 20:37:28.561214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.561249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 00:36:16.873 [2024-11-18 20:37:28.561362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.561396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 00:36:16.873 [2024-11-18 20:37:28.563042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.563119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 00:36:16.873 [2024-11-18 20:37:28.563389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.563458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 
00:36:16.873 [2024-11-18 20:37:28.563761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.563829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 00:36:16.873 [2024-11-18 20:37:28.564066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.564131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 00:36:16.873 [2024-11-18 20:37:28.564430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.564495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 00:36:16.873 [2024-11-18 20:37:28.564733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.564769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 00:36:16.873 [2024-11-18 20:37:28.564914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.564948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 
00:36:16.873 [2024-11-18 20:37:28.565090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.565125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 00:36:16.873 [2024-11-18 20:37:28.565247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.565282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 00:36:16.873 [2024-11-18 20:37:28.565467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.565519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 00:36:16.873 [2024-11-18 20:37:28.565663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.565698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 00:36:16.873 [2024-11-18 20:37:28.565810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.873 [2024-11-18 20:37:28.565845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.873 qpair failed and we were unable to recover it. 
00:36:16.873 [2024-11-18 20:37:28.565953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.565988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.874 [2024-11-18 20:37:28.566130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.566164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.874 [2024-11-18 20:37:28.566390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.566454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.874 [2024-11-18 20:37:28.566664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.566731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.874 [2024-11-18 20:37:28.566994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.567029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 
00:36:16.874 [2024-11-18 20:37:28.567165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.567198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.874 [2024-11-18 20:37:28.567333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.567368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.874 [2024-11-18 20:37:28.567536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.567571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.874 [2024-11-18 20:37:28.567710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.567746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.874 [2024-11-18 20:37:28.567890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.567930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 
00:36:16.874 [2024-11-18 20:37:28.568231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.568297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.874 [2024-11-18 20:37:28.568552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.568617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.874 [2024-11-18 20:37:28.568900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.568967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.874 [2024-11-18 20:37:28.569229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.569265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.874 [2024-11-18 20:37:28.569413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.569448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 
00:36:16.874 [2024-11-18 20:37:28.569631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.569715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.874 [2024-11-18 20:37:28.570006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.570070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.874 [2024-11-18 20:37:28.570326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.570381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.874 [2024-11-18 20:37:28.570530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.570565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.874 [2024-11-18 20:37:28.570716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.570782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 
00:36:16.874 [2024-11-18 20:37:28.571087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.571151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.874 [2024-11-18 20:37:28.571441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.571506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.874 [2024-11-18 20:37:28.571709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.571775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.874 [2024-11-18 20:37:28.572071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.572116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.874 [2024-11-18 20:37:28.572329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.572410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 
00:36:16.874 [2024-11-18 20:37:28.572710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.572777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.874 [2024-11-18 20:37:28.573032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.573098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.874 [2024-11-18 20:37:28.573384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.573450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.874 [2024-11-18 20:37:28.573698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.573743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.874 [2024-11-18 20:37:28.573948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.573992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 
00:36:16.874 [2024-11-18 20:37:28.574284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.574328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.874 [2024-11-18 20:37:28.574541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.574584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.874 [2024-11-18 20:37:28.574887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.574955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.874 [2024-11-18 20:37:28.575145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.575210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.874 [2024-11-18 20:37:28.575454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.575523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 
00:36:16.874 [2024-11-18 20:37:28.575800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.575867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.874 [2024-11-18 20:37:28.576134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.576200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.874 [2024-11-18 20:37:28.576439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.874 [2024-11-18 20:37:28.576473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.874 qpair failed and we were unable to recover it. 00:36:16.875 [2024-11-18 20:37:28.576645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.576697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 00:36:16.875 [2024-11-18 20:37:28.576988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.577052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 
00:36:16.875 [2024-11-18 20:37:28.577346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.577412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 00:36:16.875 [2024-11-18 20:37:28.577710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.577755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 00:36:16.875 [2024-11-18 20:37:28.577933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.577984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 00:36:16.875 [2024-11-18 20:37:28.578119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.578153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 00:36:16.875 [2024-11-18 20:37:28.578331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.578396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 
00:36:16.875 [2024-11-18 20:37:28.578697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.578763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 00:36:16.875 [2024-11-18 20:37:28.579043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.579077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 00:36:16.875 [2024-11-18 20:37:28.579215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.579250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 00:36:16.875 [2024-11-18 20:37:28.579420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.579485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 00:36:16.875 [2024-11-18 20:37:28.579734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.579811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 
00:36:16.875 [2024-11-18 20:37:28.580126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.580191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 00:36:16.875 [2024-11-18 20:37:28.580409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.580474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 00:36:16.875 [2024-11-18 20:37:28.580711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.580755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 00:36:16.875 [2024-11-18 20:37:28.580889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.580932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 00:36:16.875 [2024-11-18 20:37:28.581158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.581223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 
00:36:16.875 [2024-11-18 20:37:28.581508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.581542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 00:36:16.875 [2024-11-18 20:37:28.581689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.581723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 00:36:16.875 [2024-11-18 20:37:28.581907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.581942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 00:36:16.875 [2024-11-18 20:37:28.582085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.582120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 00:36:16.875 [2024-11-18 20:37:28.582236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.582286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 
00:36:16.875 [2024-11-18 20:37:28.582418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.582452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 00:36:16.875 [2024-11-18 20:37:28.582590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.582624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 00:36:16.875 [2024-11-18 20:37:28.582813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.582847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 00:36:16.875 [2024-11-18 20:37:28.583019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.583053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 00:36:16.875 [2024-11-18 20:37:28.583176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.583210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 
00:36:16.875 [2024-11-18 20:37:28.583335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.583369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 00:36:16.875 [2024-11-18 20:37:28.583527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.583560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 00:36:16.875 [2024-11-18 20:37:28.583744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.583777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 00:36:16.875 [2024-11-18 20:37:28.583910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.583953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 00:36:16.875 [2024-11-18 20:37:28.584092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.584123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 
00:36:16.875 [2024-11-18 20:37:28.584249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.584281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 00:36:16.875 [2024-11-18 20:37:28.584412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.584445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 00:36:16.875 [2024-11-18 20:37:28.584598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.584630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 00:36:16.875 [2024-11-18 20:37:28.584780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.584812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 00:36:16.875 [2024-11-18 20:37:28.584945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.584988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 
00:36:16.875 [2024-11-18 20:37:28.585131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.875 [2024-11-18 20:37:28.585162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.875 qpair failed and we were unable to recover it. 00:36:16.876 [2024-11-18 20:37:28.585295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.585327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 00:36:16.876 [2024-11-18 20:37:28.585458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.585489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 00:36:16.876 [2024-11-18 20:37:28.585651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.585698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 00:36:16.876 [2024-11-18 20:37:28.585820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.585849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 
00:36:16.876 [2024-11-18 20:37:28.585971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.586000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 00:36:16.876 [2024-11-18 20:37:28.586151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.586191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 00:36:16.876 [2024-11-18 20:37:28.586328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.586358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 00:36:16.876 [2024-11-18 20:37:28.586458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.586503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 00:36:16.876 [2024-11-18 20:37:28.586604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.586632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 
00:36:16.876 [2024-11-18 20:37:28.586767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.586795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 00:36:16.876 [2024-11-18 20:37:28.586883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.586911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 00:36:16.876 [2024-11-18 20:37:28.586996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.587026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 00:36:16.876 [2024-11-18 20:37:28.587173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.587202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 00:36:16.876 [2024-11-18 20:37:28.587314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.587342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 
00:36:16.876 [2024-11-18 20:37:28.587454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.587483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 00:36:16.876 [2024-11-18 20:37:28.587588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.587618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 00:36:16.876 [2024-11-18 20:37:28.587763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.587791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 00:36:16.876 [2024-11-18 20:37:28.587924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.587952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 00:36:16.876 [2024-11-18 20:37:28.588076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.588104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 
00:36:16.876 [2024-11-18 20:37:28.588217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.588245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 00:36:16.876 [2024-11-18 20:37:28.588370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.588398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 00:36:16.876 [2024-11-18 20:37:28.588515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.588542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 00:36:16.876 [2024-11-18 20:37:28.588669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.588705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 00:36:16.876 [2024-11-18 20:37:28.588805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.588833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 
00:36:16.876 [2024-11-18 20:37:28.588980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.589008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 00:36:16.876 [2024-11-18 20:37:28.589093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.589120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 00:36:16.876 [2024-11-18 20:37:28.589226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.589252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 00:36:16.876 [2024-11-18 20:37:28.589382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.589410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 00:36:16.876 [2024-11-18 20:37:28.589539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.589567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 
00:36:16.876 [2024-11-18 20:37:28.589645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.589671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 00:36:16.876 [2024-11-18 20:37:28.589767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.589793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 00:36:16.876 [2024-11-18 20:37:28.589871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.589898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 00:36:16.876 [2024-11-18 20:37:28.590012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.876 [2024-11-18 20:37:28.590037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.876 qpair failed and we were unable to recover it. 00:36:16.877 [2024-11-18 20:37:28.590150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.877 [2024-11-18 20:37:28.590177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.877 qpair failed and we were unable to recover it. 
00:36:16.877 [2024-11-18 20:37:28.590334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.877 [2024-11-18 20:37:28.590362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.877 qpair failed and we were unable to recover it. 00:36:16.877 [2024-11-18 20:37:28.590449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.877 [2024-11-18 20:37:28.590474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.877 qpair failed and we were unable to recover it. 00:36:16.877 [2024-11-18 20:37:28.590580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.877 [2024-11-18 20:37:28.590606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.877 qpair failed and we were unable to recover it. 00:36:16.877 [2024-11-18 20:37:28.590742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.877 [2024-11-18 20:37:28.590770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.877 qpair failed and we were unable to recover it. 00:36:16.877 [2024-11-18 20:37:28.590890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.877 [2024-11-18 20:37:28.590918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.877 qpair failed and we were unable to recover it. 
00:36:16.877 [2024-11-18 20:37:28.591005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.877 [2024-11-18 20:37:28.591031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.877 qpair failed and we were unable to recover it. 00:36:16.877 [2024-11-18 20:37:28.592647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.877 [2024-11-18 20:37:28.592677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.877 qpair failed and we were unable to recover it. 00:36:16.877 [2024-11-18 20:37:28.592848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.877 [2024-11-18 20:37:28.592875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.877 qpair failed and we were unable to recover it. 00:36:16.877 [2024-11-18 20:37:28.593004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.877 [2024-11-18 20:37:28.593030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.877 qpair failed and we were unable to recover it. 00:36:16.877 [2024-11-18 20:37:28.593147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.877 [2024-11-18 20:37:28.593176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.877 qpair failed and we were unable to recover it. 
00:36:16.877 [2024-11-18 20:37:28.593298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.877 [2024-11-18 20:37:28.593323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.877 qpair failed and we were unable to recover it. 00:36:16.877 [2024-11-18 20:37:28.593436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.877 [2024-11-18 20:37:28.593462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.877 qpair failed and we were unable to recover it. 00:36:16.877 [2024-11-18 20:37:28.593582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.877 [2024-11-18 20:37:28.593610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.877 qpair failed and we were unable to recover it. 00:36:16.877 [2024-11-18 20:37:28.593737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.877 [2024-11-18 20:37:28.593763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.877 qpair failed and we were unable to recover it. 00:36:16.877 [2024-11-18 20:37:28.593910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.877 [2024-11-18 20:37:28.593945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.877 qpair failed and we were unable to recover it. 
00:36:16.877 [2024-11-18 20:37:28.594037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.877 [2024-11-18 20:37:28.594062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.877 qpair failed and we were unable to recover it. 00:36:16.877 [2024-11-18 20:37:28.594209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.877 [2024-11-18 20:37:28.594235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.877 qpair failed and we were unable to recover it. 00:36:16.877 [2024-11-18 20:37:28.594330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.877 [2024-11-18 20:37:28.594354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.877 qpair failed and we were unable to recover it. 00:36:16.877 [2024-11-18 20:37:28.594466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.877 [2024-11-18 20:37:28.594491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.877 qpair failed and we were unable to recover it. 00:36:16.877 [2024-11-18 20:37:28.594613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.877 [2024-11-18 20:37:28.594646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.877 qpair failed and we were unable to recover it. 
00:36:16.877 [2024-11-18 20:37:28.594768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.877 [2024-11-18 20:37:28.594798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.877 qpair failed and we were unable to recover it. 00:36:16.877 [2024-11-18 20:37:28.594882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.877 [2024-11-18 20:37:28.594908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.877 qpair failed and we were unable to recover it. 00:36:16.877 [2024-11-18 20:37:28.594995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.877 [2024-11-18 20:37:28.595020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.877 qpair failed and we were unable to recover it. 00:36:16.877 [2024-11-18 20:37:28.595108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.877 [2024-11-18 20:37:28.595133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.877 qpair failed and we were unable to recover it. 00:36:16.877 [2024-11-18 20:37:28.597646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.877 [2024-11-18 20:37:28.597675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.877 qpair failed and we were unable to recover it. 
00:36:16.877 [2024-11-18 20:37:28.597832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.877 [2024-11-18 20:37:28.597858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.877 qpair failed and we were unable to recover it.
00:36:16.877 [2024-11-18 20:37:28.598004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.877 [2024-11-18 20:37:28.598029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.877 qpair failed and we were unable to recover it.
00:36:16.877 [2024-11-18 20:37:28.598148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.877 [2024-11-18 20:37:28.598174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.877 qpair failed and we were unable to recover it.
00:36:16.877 [2024-11-18 20:37:28.598299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.877 [2024-11-18 20:37:28.598326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.877 qpair failed and we were unable to recover it.
00:36:16.877 [2024-11-18 20:37:28.598440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.877 [2024-11-18 20:37:28.598467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.877 qpair failed and we were unable to recover it.
00:36:16.877 [2024-11-18 20:37:28.598613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.877 [2024-11-18 20:37:28.598645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.877 qpair failed and we were unable to recover it.
00:36:16.877 [2024-11-18 20:37:28.598745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.877 [2024-11-18 20:37:28.598769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.877 qpair failed and we were unable to recover it.
00:36:16.877 [2024-11-18 20:37:28.598860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.877 [2024-11-18 20:37:28.598885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.877 qpair failed and we were unable to recover it.
00:36:16.877 [2024-11-18 20:37:28.598991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.877 [2024-11-18 20:37:28.599020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.877 qpair failed and we were unable to recover it.
00:36:16.877 [2024-11-18 20:37:28.599140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.877 [2024-11-18 20:37:28.599167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.877 qpair failed and we were unable to recover it.
00:36:16.877 [2024-11-18 20:37:28.599288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.877 [2024-11-18 20:37:28.599313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.877 qpair failed and we were unable to recover it.
00:36:16.877 [2024-11-18 20:37:28.599462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.877 [2024-11-18 20:37:28.599489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.599597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.599651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.599782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.599811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.599928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.599962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.600053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.600079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.600173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.600199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.600350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.600377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.600504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.600531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.600673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.600706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.600818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.600844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.600922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.600953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.601071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.601100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.601188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.601212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.601319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.601345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.601455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.601482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.601597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.601624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.601737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.601763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.601884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.601911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.602032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.602060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.602176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.602202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.602283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.602308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.602403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.602428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.602514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.602538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.602634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.602665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.603648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.603675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.603815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.603842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.603972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.603999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.604099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.604124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.604241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.604267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.604358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.604383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.604526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.604553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.604665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.604700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.604816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.604842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.604970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.604997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.607661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.607699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.607913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.607951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.608042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.608068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.608158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.608183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.878 [2024-11-18 20:37:28.608302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.878 [2024-11-18 20:37:28.608330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.878 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.608413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.608438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.608554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.608580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.608724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.608750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.608914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.608945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.609041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.609067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.609158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.609183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.609268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.609294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.609377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.609402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.609550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.609575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.609711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.609736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.609827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.609853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.609960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.609985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.610105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.610130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.610249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.610275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.610403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.610429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.610546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.610571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.610663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.610697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.610841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.610867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.610972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.610997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.611141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.611167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.611267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.611292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.611374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.611399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.611506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.611530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.611618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.611649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.611739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.611763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.613646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.613674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.613831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.613862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.613944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.613969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.614117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.614144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.614235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.614261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.614338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.614363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.614483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.614508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.614593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.614619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.614732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.614757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.614879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.614905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.614979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.615004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.615120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.615145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.879 [2024-11-18 20:37:28.615264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.879 [2024-11-18 20:37:28.615289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.879 qpair failed and we were unable to recover it.
00:36:16.880 [2024-11-18 20:37:28.615381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.880 [2024-11-18 20:37:28.615406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.880 qpair failed and we were unable to recover it.
00:36:16.880 [2024-11-18 20:37:28.615520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.880 [2024-11-18 20:37:28.615545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.880 qpair failed and we were unable to recover it.
00:36:16.880 [2024-11-18 20:37:28.615657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.880 [2024-11-18 20:37:28.615701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.880 qpair failed and we were unable to recover it.
00:36:16.880 [2024-11-18 20:37:28.615836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.880 [2024-11-18 20:37:28.615866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.880 qpair failed and we were unable to recover it.
00:36:16.880 [2024-11-18 20:37:28.615959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.880 [2024-11-18 20:37:28.615985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.880 qpair failed and we were unable to recover it.
00:36:16.880 [2024-11-18 20:37:28.616106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.880 [2024-11-18 20:37:28.616135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.880 qpair failed and we were unable to recover it.
00:36:16.880 [2024-11-18 20:37:28.616222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.880 [2024-11-18 20:37:28.616249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.880 qpair failed and we were unable to recover it.
00:36:16.880 [2024-11-18 20:37:28.616360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.880 [2024-11-18 20:37:28.616385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.880 qpair failed and we were unable to recover it.
00:36:16.880 [2024-11-18 20:37:28.616510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.880 [2024-11-18 20:37:28.616537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.880 qpair failed and we were unable to recover it.
00:36:16.880 [2024-11-18 20:37:28.616664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.880 [2024-11-18 20:37:28.616699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.880 qpair failed and we were unable to recover it.
00:36:16.880 [2024-11-18 20:37:28.616817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.880 [2024-11-18 20:37:28.616844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.880 qpair failed and we were unable to recover it.
00:36:16.880 [2024-11-18 20:37:28.616940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.880 [2024-11-18 20:37:28.616970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.880 qpair failed and we were unable to recover it.
00:36:16.880 [2024-11-18 20:37:28.617086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.880 [2024-11-18 20:37:28.617111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.880 qpair failed and we were unable to recover it.
00:36:16.880 [2024-11-18 20:37:28.617217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.880 [2024-11-18 20:37:28.617242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.880 qpair failed and we were unable to recover it.
00:36:16.880 [2024-11-18 20:37:28.618646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.880 [2024-11-18 20:37:28.618674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.880 qpair failed and we were unable to recover it.
00:36:16.880 [2024-11-18 20:37:28.618849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.880 [2024-11-18 20:37:28.618874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.880 qpair failed and we were unable to recover it.
00:36:16.880 [2024-11-18 20:37:28.619002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.880 [2024-11-18 20:37:28.619027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.880 qpair failed and we were unable to recover it.
00:36:16.880 [2024-11-18 20:37:28.619146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.880 [2024-11-18 20:37:28.619170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.880 qpair failed and we were unable to recover it.
00:36:16.880 [2024-11-18 20:37:28.619257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.880 [2024-11-18 20:37:28.619282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.880 qpair failed and we were unable to recover it.
00:36:16.880 [2024-11-18 20:37:28.619426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.880 [2024-11-18 20:37:28.619451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.880 qpair failed and we were unable to recover it.
00:36:16.880 [2024-11-18 20:37:28.619576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.880 [2024-11-18 20:37:28.619618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.880 qpair failed and we were unable to recover it.
00:36:16.880 [2024-11-18 20:37:28.619755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.880 [2024-11-18 20:37:28.619784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.880 qpair failed and we were unable to recover it.
00:36:16.880 [2024-11-18 20:37:28.619905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.880 [2024-11-18 20:37:28.619943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.880 qpair failed and we were unable to recover it.
00:36:16.880 [2024-11-18 20:37:28.620034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.880 [2024-11-18 20:37:28.620059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.880 qpair failed and we were unable to recover it.
00:36:16.880 [2024-11-18 20:37:28.620167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.880 [2024-11-18 20:37:28.620192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.880 qpair failed and we were unable to recover it.
00:36:16.880 [2024-11-18 20:37:28.620311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.880 [2024-11-18 20:37:28.620337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.880 qpair failed and we were unable to recover it.
00:36:16.880 [2024-11-18 20:37:28.620452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.880 [2024-11-18 20:37:28.620476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.880 qpair failed and we were unable to recover it.
00:36:16.880 [2024-11-18 20:37:28.620593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.880 [2024-11-18 20:37:28.620618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.880 qpair failed and we were unable to recover it. 00:36:16.880 [2024-11-18 20:37:28.620765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.880 [2024-11-18 20:37:28.620792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.880 qpair failed and we were unable to recover it. 00:36:16.880 [2024-11-18 20:37:28.620916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.880 [2024-11-18 20:37:28.620949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.880 qpair failed and we were unable to recover it. 00:36:16.880 [2024-11-18 20:37:28.621092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.880 [2024-11-18 20:37:28.621120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.880 qpair failed and we were unable to recover it. 00:36:16.880 [2024-11-18 20:37:28.621204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.880 [2024-11-18 20:37:28.621229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.880 qpair failed and we were unable to recover it. 
00:36:16.880 [2024-11-18 20:37:28.621344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.880 [2024-11-18 20:37:28.621370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.880 qpair failed and we were unable to recover it. 00:36:16.880 [2024-11-18 20:37:28.621488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.880 [2024-11-18 20:37:28.621512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.880 qpair failed and we were unable to recover it. 00:36:16.880 [2024-11-18 20:37:28.621611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.880 [2024-11-18 20:37:28.621643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.880 qpair failed and we were unable to recover it. 00:36:16.880 [2024-11-18 20:37:28.623661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.880 [2024-11-18 20:37:28.623698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.880 qpair failed and we were unable to recover it. 00:36:16.880 [2024-11-18 20:37:28.623830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.880 [2024-11-18 20:37:28.623855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.880 qpair failed and we were unable to recover it. 
00:36:16.880 [2024-11-18 20:37:28.623953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.880 [2024-11-18 20:37:28.623978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.880 qpair failed and we were unable to recover it. 00:36:16.881 [2024-11-18 20:37:28.624088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.624113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 00:36:16.881 [2024-11-18 20:37:28.624206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.624231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 00:36:16.881 [2024-11-18 20:37:28.624349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.624374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 00:36:16.881 [2024-11-18 20:37:28.624459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.624484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 
00:36:16.881 [2024-11-18 20:37:28.624599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.624624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 00:36:16.881 [2024-11-18 20:37:28.624732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.624757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 00:36:16.881 [2024-11-18 20:37:28.624871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.624897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 00:36:16.881 [2024-11-18 20:37:28.625026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.625051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 00:36:16.881 [2024-11-18 20:37:28.625165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.625190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 
00:36:16.881 [2024-11-18 20:37:28.625270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.625295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 00:36:16.881 [2024-11-18 20:37:28.625413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.625439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 00:36:16.881 [2024-11-18 20:37:28.625522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.625547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 00:36:16.881 [2024-11-18 20:37:28.625676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.625702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 00:36:16.881 [2024-11-18 20:37:28.625793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.625820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 
00:36:16.881 [2024-11-18 20:37:28.625913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.625938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 00:36:16.881 [2024-11-18 20:37:28.626057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.626083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 00:36:16.881 [2024-11-18 20:37:28.626194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.626219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 00:36:16.881 [2024-11-18 20:37:28.626325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.626350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 00:36:16.881 [2024-11-18 20:37:28.626509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.626549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 
00:36:16.881 [2024-11-18 20:37:28.626702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.626731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 00:36:16.881 [2024-11-18 20:37:28.626821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.626849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 00:36:16.881 [2024-11-18 20:37:28.627001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.627029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 00:36:16.881 [2024-11-18 20:37:28.628647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.628677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 00:36:16.881 [2024-11-18 20:37:28.628787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.628813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 
00:36:16.881 [2024-11-18 20:37:28.628955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.628980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 00:36:16.881 [2024-11-18 20:37:28.629081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.629108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 00:36:16.881 [2024-11-18 20:37:28.629205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.629231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 00:36:16.881 [2024-11-18 20:37:28.629324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.629350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 00:36:16.881 [2024-11-18 20:37:28.629438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.629463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 
00:36:16.881 [2024-11-18 20:37:28.629608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.629633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 00:36:16.881 [2024-11-18 20:37:28.629785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.629812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 00:36:16.881 [2024-11-18 20:37:28.629975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.630002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 00:36:16.881 [2024-11-18 20:37:28.630099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.881 [2024-11-18 20:37:28.630124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.881 qpair failed and we were unable to recover it. 00:36:16.882 [2024-11-18 20:37:28.630201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.630225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 
00:36:16.882 [2024-11-18 20:37:28.630341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.630366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 00:36:16.882 [2024-11-18 20:37:28.630456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.630480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 00:36:16.882 [2024-11-18 20:37:28.630557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.630582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 00:36:16.882 [2024-11-18 20:37:28.630690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.630715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 00:36:16.882 [2024-11-18 20:37:28.630805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.630829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 
00:36:16.882 [2024-11-18 20:37:28.630924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.630953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 00:36:16.882 [2024-11-18 20:37:28.633647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.633676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 00:36:16.882 [2024-11-18 20:37:28.633833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.633859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 00:36:16.882 [2024-11-18 20:37:28.633979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.634005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 00:36:16.882 [2024-11-18 20:37:28.634125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.634150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 
00:36:16.882 [2024-11-18 20:37:28.634268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.634294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 00:36:16.882 [2024-11-18 20:37:28.634419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.634446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 00:36:16.882 [2024-11-18 20:37:28.634588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.634615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 00:36:16.882 [2024-11-18 20:37:28.634744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.634771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 00:36:16.882 [2024-11-18 20:37:28.634859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.634886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 
00:36:16.882 [2024-11-18 20:37:28.634986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.635013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 00:36:16.882 [2024-11-18 20:37:28.635123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.635149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 00:36:16.882 [2024-11-18 20:37:28.635256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.635282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 00:36:16.882 [2024-11-18 20:37:28.635401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.635427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 00:36:16.882 [2024-11-18 20:37:28.635536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.635563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 
00:36:16.882 [2024-11-18 20:37:28.635663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.635689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 00:36:16.882 [2024-11-18 20:37:28.635846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.635873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 00:36:16.882 [2024-11-18 20:37:28.635998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.636024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 00:36:16.882 [2024-11-18 20:37:28.636097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.636122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 00:36:16.882 [2024-11-18 20:37:28.636266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.636292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 
00:36:16.882 [2024-11-18 20:37:28.636411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.636439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 00:36:16.882 [2024-11-18 20:37:28.636552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.636577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 00:36:16.882 [2024-11-18 20:37:28.636723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.636750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 00:36:16.882 [2024-11-18 20:37:28.636833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.636860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 00:36:16.882 [2024-11-18 20:37:28.637007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.882 [2024-11-18 20:37:28.637035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.882 qpair failed and we were unable to recover it. 
00:36:16.882 [2024-11-18 20:37:28.637148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.882 [2024-11-18 20:37:28.637174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:16.882 qpair failed and we were unable to recover it.
00:36:16.882 [... the three lines above repeat with advancing timestamps through 00:36:16.885 / 2024-11-18 20:37:28.653791: every connect() attempt to 10.0.0.2 port 4420 fails with errno = 111 (ECONNREFUSED) and qpair 0x1671b40 cannot be recovered ...]
00:36:16.885 [2024-11-18 20:37:28.653889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.885 [2024-11-18 20:37:28.653914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.885 qpair failed and we were unable to recover it. 00:36:16.885 [2024-11-18 20:37:28.653997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.885 [2024-11-18 20:37:28.654023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.885 qpair failed and we were unable to recover it. 00:36:16.886 [2024-11-18 20:37:28.654150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.654178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 00:36:16.886 [2024-11-18 20:37:28.654298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.654325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 00:36:16.886 [2024-11-18 20:37:28.654430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.654457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 
00:36:16.886 [2024-11-18 20:37:28.654574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.654601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 00:36:16.886 [2024-11-18 20:37:28.654750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.654778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 00:36:16.886 [2024-11-18 20:37:28.654904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.654931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 00:36:16.886 [2024-11-18 20:37:28.655023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.655049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 00:36:16.886 [2024-11-18 20:37:28.655159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.655187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 
00:36:16.886 [2024-11-18 20:37:28.655268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.655294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 00:36:16.886 [2024-11-18 20:37:28.655400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.655425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 00:36:16.886 [2024-11-18 20:37:28.655534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.655560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 00:36:16.886 [2024-11-18 20:37:28.655657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.655683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 00:36:16.886 [2024-11-18 20:37:28.655781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.655807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 
00:36:16.886 [2024-11-18 20:37:28.655885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.655914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 00:36:16.886 [2024-11-18 20:37:28.655989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.656015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 00:36:16.886 [2024-11-18 20:37:28.656105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.656130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 00:36:16.886 [2024-11-18 20:37:28.656207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.656231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 00:36:16.886 [2024-11-18 20:37:28.656345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.656371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 
00:36:16.886 [2024-11-18 20:37:28.656456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.656480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 00:36:16.886 [2024-11-18 20:37:28.656584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.656610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 00:36:16.886 [2024-11-18 20:37:28.656693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.656719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 00:36:16.886 [2024-11-18 20:37:28.656842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.656868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 00:36:16.886 [2024-11-18 20:37:28.656987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.657013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 
00:36:16.886 [2024-11-18 20:37:28.657129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.657157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 00:36:16.886 [2024-11-18 20:37:28.657295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.657323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 00:36:16.886 [2024-11-18 20:37:28.657463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.657491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 00:36:16.886 [2024-11-18 20:37:28.657580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.657605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 00:36:16.886 [2024-11-18 20:37:28.657727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.657754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 
00:36:16.886 [2024-11-18 20:37:28.657838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.657864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 00:36:16.886 [2024-11-18 20:37:28.657947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.657972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 00:36:16.886 [2024-11-18 20:37:28.658093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.658120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 00:36:16.886 [2024-11-18 20:37:28.658221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.658246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 00:36:16.886 [2024-11-18 20:37:28.658337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.658363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 
00:36:16.886 [2024-11-18 20:37:28.658470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.886 [2024-11-18 20:37:28.658511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.886 qpair failed and we were unable to recover it. 
[... identical connect() failed / sock connection error / qpair failed triplet for tqpair=0x7fe6a0000b90 (addr=10.0.0.2, port=4420, errno = 111) repeated for each retry from 20:37:28.658661 through 20:37:28.667014; only timestamps differ ...]
00:36:16.888 [2024-11-18 20:37:28.667223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.888 [2024-11-18 20:37:28.667251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.888 qpair failed and we were unable to recover it. 00:36:16.888 [2024-11-18 20:37:28.667373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.888 [2024-11-18 20:37:28.667402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.888 qpair failed and we were unable to recover it. 00:36:16.888 [2024-11-18 20:37:28.667518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.888 [2024-11-18 20:37:28.667546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.888 qpair failed and we were unable to recover it. 00:36:16.888 [2024-11-18 20:37:28.667706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.888 [2024-11-18 20:37:28.667734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.888 qpair failed and we were unable to recover it. 00:36:16.888 [2024-11-18 20:37:28.667818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.888 [2024-11-18 20:37:28.667844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.888 qpair failed and we were unable to recover it. 
00:36:16.888 [2024-11-18 20:37:28.667970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.888 [2024-11-18 20:37:28.667996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.888 qpair failed and we were unable to recover it. 00:36:16.888 [2024-11-18 20:37:28.668116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.888 [2024-11-18 20:37:28.668161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.888 qpair failed and we were unable to recover it. 00:36:16.888 [2024-11-18 20:37:28.668303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.888 [2024-11-18 20:37:28.668340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.888 qpair failed and we were unable to recover it. 00:36:16.888 [2024-11-18 20:37:28.668465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.888 [2024-11-18 20:37:28.668493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.888 qpair failed and we were unable to recover it. 00:36:16.888 [2024-11-18 20:37:28.668623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.888 [2024-11-18 20:37:28.668657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.888 qpair failed and we were unable to recover it. 
00:36:16.888 [2024-11-18 20:37:28.668738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.888 [2024-11-18 20:37:28.668763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.888 qpair failed and we were unable to recover it. 00:36:16.888 [2024-11-18 20:37:28.668852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.888 [2024-11-18 20:37:28.668877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.888 qpair failed and we were unable to recover it. 00:36:16.888 [2024-11-18 20:37:28.668989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.888 [2024-11-18 20:37:28.669016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.888 qpair failed and we were unable to recover it. 00:36:16.888 [2024-11-18 20:37:28.669090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.888 [2024-11-18 20:37:28.669117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.888 qpair failed and we were unable to recover it. 00:36:16.888 [2024-11-18 20:37:28.669296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.888 [2024-11-18 20:37:28.669339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.888 qpair failed and we were unable to recover it. 
00:36:16.888 [2024-11-18 20:37:28.669553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.888 [2024-11-18 20:37:28.669583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.888 qpair failed and we were unable to recover it. 00:36:16.888 [2024-11-18 20:37:28.669722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.888 [2024-11-18 20:37:28.669751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.888 qpair failed and we were unable to recover it. 00:36:16.888 [2024-11-18 20:37:28.669894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.888 [2024-11-18 20:37:28.669942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.888 qpair failed and we were unable to recover it. 00:36:16.888 [2024-11-18 20:37:28.670085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.888 [2024-11-18 20:37:28.670114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.888 qpair failed and we were unable to recover it. 00:36:16.888 [2024-11-18 20:37:28.670228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.670257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 
00:36:16.889 [2024-11-18 20:37:28.670386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.670415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 00:36:16.889 [2024-11-18 20:37:28.670561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.670590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 00:36:16.889 [2024-11-18 20:37:28.670750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.670779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 00:36:16.889 [2024-11-18 20:37:28.670892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.670941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 00:36:16.889 [2024-11-18 20:37:28.671092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.671120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 
00:36:16.889 [2024-11-18 20:37:28.671228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.671257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 00:36:16.889 [2024-11-18 20:37:28.671401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.671429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 00:36:16.889 [2024-11-18 20:37:28.671619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.671718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 00:36:16.889 [2024-11-18 20:37:28.671811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.671838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 00:36:16.889 [2024-11-18 20:37:28.672059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.672117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 
00:36:16.889 [2024-11-18 20:37:28.672317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.672352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 00:36:16.889 [2024-11-18 20:37:28.672487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.672515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 00:36:16.889 [2024-11-18 20:37:28.672604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.672631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 00:36:16.889 [2024-11-18 20:37:28.672766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.672794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 00:36:16.889 [2024-11-18 20:37:28.672913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.672940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 
00:36:16.889 [2024-11-18 20:37:28.673028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.673070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 00:36:16.889 [2024-11-18 20:37:28.673246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.673296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 00:36:16.889 [2024-11-18 20:37:28.673389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.673415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 00:36:16.889 [2024-11-18 20:37:28.673501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.673528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 00:36:16.889 [2024-11-18 20:37:28.673666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.673693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 
00:36:16.889 [2024-11-18 20:37:28.673772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.673797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 00:36:16.889 [2024-11-18 20:37:28.673898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.673945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 00:36:16.889 [2024-11-18 20:37:28.674068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.674095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 00:36:16.889 [2024-11-18 20:37:28.674212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.674239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 00:36:16.889 [2024-11-18 20:37:28.674343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.674374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 
00:36:16.889 [2024-11-18 20:37:28.674489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.674517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 00:36:16.889 [2024-11-18 20:37:28.674596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.674624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 00:36:16.889 [2024-11-18 20:37:28.674761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.674789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 00:36:16.889 [2024-11-18 20:37:28.674935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.674962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 00:36:16.889 [2024-11-18 20:37:28.675045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.675071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 
00:36:16.889 [2024-11-18 20:37:28.675180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.675208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 00:36:16.889 [2024-11-18 20:37:28.675307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.675335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 00:36:16.889 [2024-11-18 20:37:28.675450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.675479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 00:36:16.889 [2024-11-18 20:37:28.675590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.675620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 00:36:16.889 [2024-11-18 20:37:28.675755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.675783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 
00:36:16.889 [2024-11-18 20:37:28.675934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.675961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 00:36:16.889 [2024-11-18 20:37:28.676066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.889 [2024-11-18 20:37:28.676093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.889 qpair failed and we were unable to recover it. 00:36:16.890 [2024-11-18 20:37:28.676287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.890 [2024-11-18 20:37:28.676346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.890 qpair failed and we were unable to recover it. 00:36:16.890 [2024-11-18 20:37:28.676462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.890 [2024-11-18 20:37:28.676491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.890 qpair failed and we were unable to recover it. 00:36:16.890 [2024-11-18 20:37:28.676619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.890 [2024-11-18 20:37:28.676659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.890 qpair failed and we were unable to recover it. 
00:36:16.890 [2024-11-18 20:37:28.676780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.890 [2024-11-18 20:37:28.676808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.890 qpair failed and we were unable to recover it. 00:36:16.890 [2024-11-18 20:37:28.676923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.890 [2024-11-18 20:37:28.676953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.890 qpair failed and we were unable to recover it. 00:36:16.890 [2024-11-18 20:37:28.677083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.890 [2024-11-18 20:37:28.677115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.890 qpair failed and we were unable to recover it. 00:36:16.890 [2024-11-18 20:37:28.677212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.890 [2024-11-18 20:37:28.677245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.890 qpair failed and we were unable to recover it. 00:36:16.890 [2024-11-18 20:37:28.677339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.890 [2024-11-18 20:37:28.677372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.890 qpair failed and we were unable to recover it. 
00:36:16.890 [2024-11-18 20:37:28.677524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.890 [2024-11-18 20:37:28.677553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.890 qpair failed and we were unable to recover it. 00:36:16.890 [2024-11-18 20:37:28.677641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.890 [2024-11-18 20:37:28.677668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.890 qpair failed and we were unable to recover it. 00:36:16.890 [2024-11-18 20:37:28.677759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.890 [2024-11-18 20:37:28.677790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.890 qpair failed and we were unable to recover it. 00:36:16.890 [2024-11-18 20:37:28.677896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.890 [2024-11-18 20:37:28.677927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.890 qpair failed and we were unable to recover it. 00:36:16.890 [2024-11-18 20:37:28.678029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.890 [2024-11-18 20:37:28.678055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.890 qpair failed and we were unable to recover it. 
00:36:16.890 [2024-11-18 20:37:28.678149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.890 [2024-11-18 20:37:28.678178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.890 qpair failed and we were unable to recover it. 00:36:16.890 [2024-11-18 20:37:28.678331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.890 [2024-11-18 20:37:28.678359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.890 qpair failed and we were unable to recover it. 00:36:16.890 [2024-11-18 20:37:28.678449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.890 [2024-11-18 20:37:28.678475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.890 qpair failed and we were unable to recover it. 00:36:16.890 [2024-11-18 20:37:28.678583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.890 [2024-11-18 20:37:28.678612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.890 qpair failed and we were unable to recover it. 00:36:16.890 [2024-11-18 20:37:28.678740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.890 [2024-11-18 20:37:28.678770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.890 qpair failed and we were unable to recover it. 
00:36:16.890 [2024-11-18 20:37:28.678890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.890 [2024-11-18 20:37:28.678919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.890 qpair failed and we were unable to recover it. 00:36:16.890 [2024-11-18 20:37:28.679040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.890 [2024-11-18 20:37:28.679069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.890 qpair failed and we were unable to recover it. 00:36:16.890 [2024-11-18 20:37:28.679215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.890 [2024-11-18 20:37:28.679244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.890 qpair failed and we were unable to recover it. 00:36:16.890 [2024-11-18 20:37:28.679338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.890 [2024-11-18 20:37:28.679366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.890 qpair failed and we were unable to recover it. 00:36:16.890 [2024-11-18 20:37:28.679459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.890 [2024-11-18 20:37:28.679489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.890 qpair failed and we were unable to recover it. 
00:36:16.890 [2024-11-18 20:37:28.679641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.890 [2024-11-18 20:37:28.679671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.890 qpair failed and we were unable to recover it. 00:36:16.890 [2024-11-18 20:37:28.679787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.890 [2024-11-18 20:37:28.679833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.890 qpair failed and we were unable to recover it. 00:36:16.890 [2024-11-18 20:37:28.679964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.890 [2024-11-18 20:37:28.680010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.890 qpair failed and we were unable to recover it. 00:36:16.890 [2024-11-18 20:37:28.680224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.890 [2024-11-18 20:37:28.680312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.890 qpair failed and we were unable to recover it. 00:36:16.890 [2024-11-18 20:37:28.680403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.890 [2024-11-18 20:37:28.680431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.890 qpair failed and we were unable to recover it. 
00:36:16.890 [2024-11-18 20:37:28.680523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.890 [2024-11-18 20:37:28.680549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.890 qpair failed and we were unable to recover it.
00:36:16.890 [2024-11-18 20:37:28.680631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.890 [2024-11-18 20:37:28.680688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.890 qpair failed and we were unable to recover it.
00:36:16.890 [2024-11-18 20:37:28.680824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.890 [2024-11-18 20:37:28.680870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.890 qpair failed and we were unable to recover it.
00:36:16.890 [2024-11-18 20:37:28.680991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.890 [2024-11-18 20:37:28.681017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.890 qpair failed and we were unable to recover it.
00:36:16.890 [2024-11-18 20:37:28.681113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.890 [2024-11-18 20:37:28.681141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.890 qpair failed and we were unable to recover it.
00:36:16.890 [2024-11-18 20:37:28.681254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.890 [2024-11-18 20:37:28.681281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.890 qpair failed and we were unable to recover it.
00:36:16.890 [2024-11-18 20:37:28.681377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.681402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.681497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.681524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.681632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.681667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.681784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.681828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.681991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.682033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.682162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.682191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.682304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.682333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.682462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.682491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.682582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.682608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.682699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.682727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.682806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.682831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.682942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.682970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.683088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.683115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.683248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.683277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.683357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.683381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.683492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.683520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.683630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.683671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.683787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.683814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.683906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.683933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.684025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.684053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.684167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.684194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.684306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.684333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.684425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.684451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.684559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.684586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.684683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.684713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.684802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.684830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.684946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.684976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.685093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.685122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.685208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.685235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.685376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.685405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.685530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.685559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.685651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.685679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.685765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.685791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.685905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.685954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.686083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.686117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.686248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.686280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.686386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.686415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.686526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.686556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.686648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.686676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.686814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.686846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.686958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.891 [2024-11-18 20:37:28.686992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.891 qpair failed and we were unable to recover it.
00:36:16.891 [2024-11-18 20:37:28.687094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.687127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.687233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.687262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.687434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.687480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.687622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.687655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.687773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.687801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.687903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.687930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.688085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.688129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.688211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.688236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.688333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.688362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.688534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.688562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.688650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.688676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.688794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.688821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.688916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.688943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.689068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.689096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.689235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.689265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.689378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.689403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.689489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.689514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.689649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.689699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.689822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.689851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.689958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.690001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.690120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.690151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.690267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.690296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.690386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.690415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.690525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.690554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.690662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.690691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.690778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.690805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.690898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.690927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.691048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.691079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.691183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.691217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.691392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.691439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.691557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.691585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.691722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.691764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.691883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.691929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.692081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.692112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.692284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.692314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.692407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.692438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.692532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.692563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.692667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.692694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.692790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.692819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.692907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.692936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.892 [2024-11-18 20:37:28.693034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.892 [2024-11-18 20:37:28.693069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.892 qpair failed and we were unable to recover it.
00:36:16.893 [2024-11-18 20:37:28.693221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.893 [2024-11-18 20:37:28.693252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.893 qpair failed and we were unable to recover it.
00:36:16.893 [2024-11-18 20:37:28.693412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.893 [2024-11-18 20:37:28.693446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.893 qpair failed and we were unable to recover it.
00:36:16.893 [2024-11-18 20:37:28.693578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.893 [2024-11-18 20:37:28.693607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:16.893 qpair failed and we were unable to recover it.
00:36:16.893 [2024-11-18 20:37:28.693751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.893 [2024-11-18 20:37:28.693794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.893 qpair failed and we were unable to recover it.
00:36:16.893 [2024-11-18 20:37:28.693890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.893 [2024-11-18 20:37:28.693937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.893 qpair failed and we were unable to recover it.
00:36:16.893 [2024-11-18 20:37:28.694091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.893 [2024-11-18 20:37:28.694122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.893 qpair failed and we were unable to recover it.
00:36:16.893 [2024-11-18 20:37:28.694212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.893 [2024-11-18 20:37:28.694240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:16.893 qpair failed and we were unable to recover it.
00:36:16.893 [2024-11-18 20:37:28.694325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.694356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 00:36:16.893 [2024-11-18 20:37:28.694502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.694543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 00:36:16.893 [2024-11-18 20:37:28.694643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.694674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 00:36:16.893 [2024-11-18 20:37:28.694771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.694800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 00:36:16.893 [2024-11-18 20:37:28.694943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.694972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 
00:36:16.893 [2024-11-18 20:37:28.695108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.695137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 00:36:16.893 [2024-11-18 20:37:28.695228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.695257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 00:36:16.893 [2024-11-18 20:37:28.695349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.695380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 00:36:16.893 [2024-11-18 20:37:28.695517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.695548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 00:36:16.893 [2024-11-18 20:37:28.695664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.695693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 
00:36:16.893 [2024-11-18 20:37:28.695782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.695807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 00:36:16.893 [2024-11-18 20:37:28.695887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.695914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 00:36:16.893 [2024-11-18 20:37:28.695995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.696020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 00:36:16.893 [2024-11-18 20:37:28.696145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.696172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 00:36:16.893 [2024-11-18 20:37:28.696314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.696341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 
00:36:16.893 [2024-11-18 20:37:28.696451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.696478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 00:36:16.893 [2024-11-18 20:37:28.696604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.696634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 00:36:16.893 [2024-11-18 20:37:28.696770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.696799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 00:36:16.893 [2024-11-18 20:37:28.696882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.696910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 00:36:16.893 [2024-11-18 20:37:28.697044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.697074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 
00:36:16.893 [2024-11-18 20:37:28.697200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.697229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 00:36:16.893 [2024-11-18 20:37:28.697322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.697351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 00:36:16.893 [2024-11-18 20:37:28.697482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.697511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 00:36:16.893 [2024-11-18 20:37:28.697628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.697661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 00:36:16.893 [2024-11-18 20:37:28.697748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.697774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 
00:36:16.893 [2024-11-18 20:37:28.697868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.697895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 00:36:16.893 [2024-11-18 20:37:28.698009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.698037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 00:36:16.893 [2024-11-18 20:37:28.698116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.698141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 00:36:16.893 [2024-11-18 20:37:28.698258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.698286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 00:36:16.893 [2024-11-18 20:37:28.698379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.698410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 
00:36:16.893 [2024-11-18 20:37:28.698527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.698555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 00:36:16.893 [2024-11-18 20:37:28.698649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.698677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 00:36:16.893 [2024-11-18 20:37:28.698791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.698820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 00:36:16.893 [2024-11-18 20:37:28.698914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.698943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 00:36:16.893 [2024-11-18 20:37:28.699053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.699086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 
00:36:16.893 [2024-11-18 20:37:28.699210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.893 [2024-11-18 20:37:28.699238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.893 qpair failed and we were unable to recover it. 00:36:16.893 [2024-11-18 20:37:28.699362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.699391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.699513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.699543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.699631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.699672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.699785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.699812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 
00:36:16.894 [2024-11-18 20:37:28.699892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.699917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.700034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.700062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.700140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.700167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.700249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.700278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.700397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.700425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 
00:36:16.894 [2024-11-18 20:37:28.700545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.700574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.700660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.700687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.700779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.700807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.700916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.700958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.701053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.701082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 
00:36:16.894 [2024-11-18 20:37:28.701194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.701221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.701359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.701387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.701481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.701506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.701589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.701617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.701772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.701803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 
00:36:16.894 [2024-11-18 20:37:28.701919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.701947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.702064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.702094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.702212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.702241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.702323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.702350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.702436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.702464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 
00:36:16.894 [2024-11-18 20:37:28.702554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.702583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.702705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.702732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.702847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.702873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.702961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.702985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.703094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.703121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 
00:36:16.894 [2024-11-18 20:37:28.703231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.703260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.703344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.703371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.703482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.703510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.703656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.703684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.703797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.703825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 
00:36:16.894 [2024-11-18 20:37:28.703907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.703932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.704041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.704069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.704212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.704241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.704346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.704373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 00:36:16.894 [2024-11-18 20:37:28.704515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.704547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.894 qpair failed and we were unable to recover it. 
00:36:16.894 [2024-11-18 20:37:28.704630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.894 [2024-11-18 20:37:28.704662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.895 qpair failed and we were unable to recover it. 00:36:16.895 [2024-11-18 20:37:28.704800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.895 [2024-11-18 20:37:28.704828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.895 qpair failed and we were unable to recover it. 00:36:16.895 [2024-11-18 20:37:28.704910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.895 [2024-11-18 20:37:28.704936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.895 qpair failed and we were unable to recover it. 00:36:16.895 [2024-11-18 20:37:28.705074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.895 [2024-11-18 20:37:28.705102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.895 qpair failed and we were unable to recover it. 00:36:16.895 [2024-11-18 20:37:28.705186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.895 [2024-11-18 20:37:28.705213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.895 qpair failed and we were unable to recover it. 
00:36:16.895 [2024-11-18 20:37:28.705323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.895 [2024-11-18 20:37:28.705350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.895 qpair failed and we were unable to recover it. 00:36:16.895 [2024-11-18 20:37:28.705432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.895 [2024-11-18 20:37:28.705458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.895 qpair failed and we were unable to recover it. 00:36:16.895 [2024-11-18 20:37:28.705584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.895 [2024-11-18 20:37:28.705624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.895 qpair failed and we were unable to recover it. 00:36:16.895 [2024-11-18 20:37:28.705747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.895 [2024-11-18 20:37:28.705776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:16.895 qpair failed and we were unable to recover it. 00:36:16.895 [2024-11-18 20:37:28.705866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.895 [2024-11-18 20:37:28.705895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.895 qpair failed and we were unable to recover it. 
00:36:16.895 [2024-11-18 20:37:28.705976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.895 [2024-11-18 20:37:28.706000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.895 qpair failed and we were unable to recover it. 00:36:16.895 [2024-11-18 20:37:28.706074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.895 [2024-11-18 20:37:28.706099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.895 qpair failed and we were unable to recover it. 00:36:16.895 [2024-11-18 20:37:28.706207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.895 [2024-11-18 20:37:28.706233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.895 qpair failed and we were unable to recover it. 00:36:16.895 [2024-11-18 20:37:28.706329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.895 [2024-11-18 20:37:28.706357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.895 qpair failed and we were unable to recover it. 00:36:16.895 [2024-11-18 20:37:28.706486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.895 [2024-11-18 20:37:28.706527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.895 qpair failed and we were unable to recover it. 
00:36:16.897 [2024-11-18 20:37:28.722473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.897 [2024-11-18 20:37:28.722499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.897 qpair failed and we were unable to recover it. 00:36:16.897 [2024-11-18 20:37:28.722592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.897 [2024-11-18 20:37:28.722618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.897 qpair failed and we were unable to recover it. 00:36:16.897 [2024-11-18 20:37:28.722786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.897 [2024-11-18 20:37:28.722823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.897 qpair failed and we were unable to recover it. 00:36:16.897 [2024-11-18 20:37:28.722903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.897 [2024-11-18 20:37:28.722943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.897 qpair failed and we were unable to recover it. 00:36:16.897 [2024-11-18 20:37:28.723055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.897 [2024-11-18 20:37:28.723084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.897 qpair failed and we were unable to recover it. 
00:36:16.897 [2024-11-18 20:37:28.723236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.897 [2024-11-18 20:37:28.723262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.897 qpair failed and we were unable to recover it. 00:36:16.897 [2024-11-18 20:37:28.723389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.897 [2024-11-18 20:37:28.723417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.897 qpair failed and we were unable to recover it. 00:36:16.897 [2024-11-18 20:37:28.723554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.897 [2024-11-18 20:37:28.723595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.897 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.723729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.723765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.723881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.723909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 
00:36:16.898 [2024-11-18 20:37:28.724105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.724141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.724379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.724430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.724589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.724618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.724735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.724763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.724876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.724904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 
00:36:16.898 [2024-11-18 20:37:28.725135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.725188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.725403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.725457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.725674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.725706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.725836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.725871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.726030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.726091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 
00:36:16.898 [2024-11-18 20:37:28.726369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.726439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.726671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.726698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.726801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.726829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.726946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.726974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.727105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.727137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 
00:36:16.898 [2024-11-18 20:37:28.727287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.727315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.727571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.727658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.727804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.727832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.727915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.727942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.728065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.728094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 
00:36:16.898 [2024-11-18 20:37:28.728275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.728307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.728462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.728494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.728715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.728743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.728850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.728890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.729068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.729097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 
00:36:16.898 [2024-11-18 20:37:28.729188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.729218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.729313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.729343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.729534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.729599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.729725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.729755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.729833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.729858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 
00:36:16.898 [2024-11-18 20:37:28.730077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.730142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.730357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.730384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.730498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.730525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.730610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.730641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.730729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.730757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 
00:36:16.898 [2024-11-18 20:37:28.730876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.730905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.731016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.731044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.731153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.898 [2024-11-18 20:37:28.731181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.898 qpair failed and we were unable to recover it. 00:36:16.898 [2024-11-18 20:37:28.731261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.731291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 00:36:16.899 [2024-11-18 20:37:28.731442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.731508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 
00:36:16.899 [2024-11-18 20:37:28.731724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.731763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 00:36:16.899 [2024-11-18 20:37:28.731844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.731870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 00:36:16.899 [2024-11-18 20:37:28.732013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.732041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 00:36:16.899 [2024-11-18 20:37:28.732224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.732277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 00:36:16.899 [2024-11-18 20:37:28.732464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.732522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 
00:36:16.899 [2024-11-18 20:37:28.732678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.732721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 00:36:16.899 [2024-11-18 20:37:28.732852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.732883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 00:36:16.899 [2024-11-18 20:37:28.733028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.733057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 00:36:16.899 [2024-11-18 20:37:28.733263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.733330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 00:36:16.899 [2024-11-18 20:37:28.733631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.733669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 
00:36:16.899 [2024-11-18 20:37:28.733816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.733843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 00:36:16.899 [2024-11-18 20:37:28.733950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.733977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 00:36:16.899 [2024-11-18 20:37:28.734063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.734091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 00:36:16.899 [2024-11-18 20:37:28.734182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.734210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 00:36:16.899 [2024-11-18 20:37:28.734403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.734470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 
00:36:16.899 [2024-11-18 20:37:28.734697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.734727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 00:36:16.899 [2024-11-18 20:37:28.734870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.734899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 00:36:16.899 [2024-11-18 20:37:28.735149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.735185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 00:36:16.899 [2024-11-18 20:37:28.735353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.735389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 00:36:16.899 [2024-11-18 20:37:28.735622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.735672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 
00:36:16.899 [2024-11-18 20:37:28.735802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.735831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 00:36:16.899 [2024-11-18 20:37:28.735946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.735991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 00:36:16.899 [2024-11-18 20:37:28.736144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.736176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 00:36:16.899 [2024-11-18 20:37:28.736441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.736471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 00:36:16.899 [2024-11-18 20:37:28.736585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.736615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 
00:36:16.899 [2024-11-18 20:37:28.736760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.736788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 00:36:16.899 [2024-11-18 20:37:28.736870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.736916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 00:36:16.899 [2024-11-18 20:37:28.737079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.737134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 00:36:16.899 [2024-11-18 20:37:28.737332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.737386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 00:36:16.899 [2024-11-18 20:37:28.737606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.899 [2024-11-18 20:37:28.737641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.899 qpair failed and we were unable to recover it. 
00:36:16.901 [2024-11-18 20:37:28.755655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.901 [2024-11-18 20:37:28.755685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.901 qpair failed and we were unable to recover it. 00:36:16.901 [2024-11-18 20:37:28.755782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.901 [2024-11-18 20:37:28.755811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.901 qpair failed and we were unable to recover it. 00:36:16.901 [2024-11-18 20:37:28.755957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.901 [2024-11-18 20:37:28.755986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.901 qpair failed and we were unable to recover it. 00:36:16.901 [2024-11-18 20:37:28.756105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.901 [2024-11-18 20:37:28.756166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.901 qpair failed and we were unable to recover it. 00:36:16.901 [2024-11-18 20:37:28.756272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.901 [2024-11-18 20:37:28.756343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.901 qpair failed and we were unable to recover it. 
00:36:16.901 [2024-11-18 20:37:28.756564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.901 [2024-11-18 20:37:28.756594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.901 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.756692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.756720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.756810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.756839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.756950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.756984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.757161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.757196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 
00:36:16.902 [2024-11-18 20:37:28.757334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.757370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.757571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.757628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.757776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.757810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.757971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.758029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.758192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.758238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 
00:36:16.902 [2024-11-18 20:37:28.758364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.758409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.758493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.758521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.758625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.758679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.758799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.758838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.758933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.758962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 
00:36:16.902 [2024-11-18 20:37:28.759133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.759168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.759272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.759308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.759413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.759449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.759621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.759654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.759763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.759790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 
00:36:16.902 [2024-11-18 20:37:28.759962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.760008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.760141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.760189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.760393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.760439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.760566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.760594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.760718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.760764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 
00:36:16.902 [2024-11-18 20:37:28.760904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.760953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.761061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.761089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.761179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.761206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.761294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.761320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.761445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.761472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 
00:36:16.902 [2024-11-18 20:37:28.761612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.761648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.761740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.761766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.761879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.761907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.761993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.762019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.762113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.762141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 
00:36:16.902 [2024-11-18 20:37:28.762228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.762253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.762364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.762391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.762487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.762526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.762678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.762709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.762800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.762830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 
00:36:16.902 [2024-11-18 20:37:28.762951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.762980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.763095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.763122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.763216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.763242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.763359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.763386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 00:36:16.902 [2024-11-18 20:37:28.763495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.763522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.902 qpair failed and we were unable to recover it. 
00:36:16.902 [2024-11-18 20:37:28.763644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.902 [2024-11-18 20:37:28.763674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.763818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.763845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.763928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.763956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.764069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.764096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.764212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.764240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 
00:36:16.903 [2024-11-18 20:37:28.764355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.764381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.764495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.764523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.764664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.764691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.764787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.764819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.764924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.764966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 
00:36:16.903 [2024-11-18 20:37:28.765095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.765126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.765241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.765270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.765390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.765420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.765563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.765592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.765716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.765745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 
00:36:16.903 [2024-11-18 20:37:28.765885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.765920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.766016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.766051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.766189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.766224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.766392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.766444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.766534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.766560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 
00:36:16.903 [2024-11-18 20:37:28.766689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.766717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.766932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.766961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.767071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.767100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.767192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.767218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.767338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.767367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 
00:36:16.903 [2024-11-18 20:37:28.767451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.767479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.767598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.767626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.767747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.767775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.767895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.767923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.768040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.768068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 
00:36:16.903 [2024-11-18 20:37:28.768231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.768260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.768373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.768402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.768496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.768525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.768650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.768680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.768862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.768891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 
00:36:16.903 [2024-11-18 20:37:28.769069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.769113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.769215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.769276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.769421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.769449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.769563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.769590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.769725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.769754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 
00:36:16.903 [2024-11-18 20:37:28.769894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.769939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.770116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.770145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.770268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.770322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.770445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.770472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.770612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.770647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 
00:36:16.903 [2024-11-18 20:37:28.770737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.770765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.770851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.770880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.771032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.771062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.771244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.903 [2024-11-18 20:37:28.771293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.903 qpair failed and we were unable to recover it. 00:36:16.903 [2024-11-18 20:37:28.771376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.771405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 
00:36:16.904 [2024-11-18 20:37:28.771487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.771512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.771594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.771622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.771748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.771776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.771893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.771921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.772054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.772084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 
00:36:16.904 [2024-11-18 20:37:28.772184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.772212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.772361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.772388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.772508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.772557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.772685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.772720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.772873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.772902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 
00:36:16.904 [2024-11-18 20:37:28.773065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.773118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.773343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.773400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.773546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.773594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.773761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.773790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.773939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.773990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 
00:36:16.904 [2024-11-18 20:37:28.774122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.774179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.774315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.774345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.774476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.774516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.774619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.774650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.774772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.774800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 
00:36:16.904 [2024-11-18 20:37:28.774888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.774913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.774993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.775021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.775132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.775159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.775245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.775274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.775357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.775384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 
00:36:16.904 [2024-11-18 20:37:28.775555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.775586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.775734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.775784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.775908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.775939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.776036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.776065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.776179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.776207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 
00:36:16.904 [2024-11-18 20:37:28.776349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.776378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.776468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.776498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.776634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.776685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.776837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.776867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.777002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.777032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 
00:36:16.904 [2024-11-18 20:37:28.777168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.777224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.777393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.777425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.777527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.777556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.777656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.777691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.777812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.777841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 
00:36:16.904 [2024-11-18 20:37:28.777961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.777990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.778120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.778166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.778294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.778339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.778452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.778482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.778687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.778716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 
00:36:16.904 [2024-11-18 20:37:28.778832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.778861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.778974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.779003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.779113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.779141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.779284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.779315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.779446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.779493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 
00:36:16.904 [2024-11-18 20:37:28.779612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.779651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.779788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.779817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.904 [2024-11-18 20:37:28.779963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.904 [2024-11-18 20:37:28.779991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.904 qpair failed and we were unable to recover it. 00:36:16.905 [2024-11-18 20:37:28.780082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.905 [2024-11-18 20:37:28.780112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.905 qpair failed and we were unable to recover it. 00:36:16.905 [2024-11-18 20:37:28.780300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.905 [2024-11-18 20:37:28.780329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.905 qpair failed and we were unable to recover it. 
00:36:16.905 [2024-11-18 20:37:28.780488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.905 [2024-11-18 20:37:28.780518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.905 qpair failed and we were unable to recover it. 00:36:16.905 [2024-11-18 20:37:28.780666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.905 [2024-11-18 20:37:28.780713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.905 qpair failed and we were unable to recover it. 00:36:16.905 [2024-11-18 20:37:28.780857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.905 [2024-11-18 20:37:28.780885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.905 qpair failed and we were unable to recover it. 00:36:16.905 [2024-11-18 20:37:28.781050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.905 [2024-11-18 20:37:28.781079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.905 qpair failed and we were unable to recover it. 00:36:16.905 [2024-11-18 20:37:28.781196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.905 [2024-11-18 20:37:28.781226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.905 qpair failed and we were unable to recover it. 
00:36:16.905 [2024-11-18 20:37:28.781415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.905 [2024-11-18 20:37:28.781469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.905 qpair failed and we were unable to recover it. 00:36:16.905 [2024-11-18 20:37:28.781651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.905 [2024-11-18 20:37:28.781681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.905 qpair failed and we were unable to recover it. 00:36:16.905 [2024-11-18 20:37:28.781804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.905 [2024-11-18 20:37:28.781832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.905 qpair failed and we were unable to recover it. 00:36:16.905 [2024-11-18 20:37:28.782002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.905 [2024-11-18 20:37:28.782055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.905 qpair failed and we were unable to recover it. 00:36:16.905 [2024-11-18 20:37:28.782299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.905 [2024-11-18 20:37:28.782352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.905 qpair failed and we were unable to recover it. 
00:36:16.905 [2024-11-18 20:37:28.782606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.905 [2024-11-18 20:37:28.782687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.905 qpair failed and we were unable to recover it. 00:36:16.905 [2024-11-18 20:37:28.782805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.905 [2024-11-18 20:37:28.782834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.905 qpair failed and we were unable to recover it. 00:36:16.905 [2024-11-18 20:37:28.782947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.905 [2024-11-18 20:37:28.782975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.905 qpair failed and we were unable to recover it. 00:36:16.905 [2024-11-18 20:37:28.783085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.905 [2024-11-18 20:37:28.783114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.905 qpair failed and we were unable to recover it. 00:36:16.905 [2024-11-18 20:37:28.783324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.905 [2024-11-18 20:37:28.783378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.905 qpair failed and we were unable to recover it. 
00:36:16.905 [2024-11-18 20:37:28.783562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.905 [2024-11-18 20:37:28.783592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.905 qpair failed and we were unable to recover it. 00:36:16.905 [2024-11-18 20:37:28.783744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.905 [2024-11-18 20:37:28.783774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.905 qpair failed and we were unable to recover it. 00:36:16.905 [2024-11-18 20:37:28.783871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.905 [2024-11-18 20:37:28.783920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.905 qpair failed and we were unable to recover it. 00:36:16.905 [2024-11-18 20:37:28.784123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.905 [2024-11-18 20:37:28.784152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.905 qpair failed and we were unable to recover it. 00:36:16.905 [2024-11-18 20:37:28.784241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.905 [2024-11-18 20:37:28.784314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.905 qpair failed and we were unable to recover it. 
00:36:16.906 [2024-11-18 20:37:28.789823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.906 [2024-11-18 20:37:28.789864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.906 qpair failed and we were unable to recover it. 00:36:16.906 [2024-11-18 20:37:28.789973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.906 [2024-11-18 20:37:28.790002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.906 qpair failed and we were unable to recover it. 00:36:16.906 [2024-11-18 20:37:28.790121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.906 [2024-11-18 20:37:28.790148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.906 qpair failed and we were unable to recover it. 00:36:16.906 [2024-11-18 20:37:28.790294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.906 [2024-11-18 20:37:28.790322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.906 qpair failed and we were unable to recover it. 00:36:16.906 [2024-11-18 20:37:28.790436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.906 [2024-11-18 20:37:28.790463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.906 qpair failed and we were unable to recover it. 
00:36:16.906 [2024-11-18 20:37:28.790567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.906 [2024-11-18 20:37:28.790596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:16.906 qpair failed and we were unable to recover it. 00:36:16.906 [2024-11-18 20:37:28.790760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.906 [2024-11-18 20:37:28.790792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.906 qpair failed and we were unable to recover it. 00:36:16.906 [2024-11-18 20:37:28.790909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.906 [2024-11-18 20:37:28.790938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.906 qpair failed and we were unable to recover it. 00:36:16.906 [2024-11-18 20:37:28.791055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.906 [2024-11-18 20:37:28.791083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.906 qpair failed and we were unable to recover it. 00:36:16.906 [2024-11-18 20:37:28.791205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.906 [2024-11-18 20:37:28.791233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.906 qpair failed and we were unable to recover it. 
00:36:16.908 [2024-11-18 20:37:28.812840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.908 [2024-11-18 20:37:28.812875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.908 qpair failed and we were unable to recover it. 00:36:16.908 [2024-11-18 20:37:28.812995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.908 [2024-11-18 20:37:28.813025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.908 qpair failed and we were unable to recover it. 00:36:16.908 [2024-11-18 20:37:28.813125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.908 [2024-11-18 20:37:28.813153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.908 qpair failed and we were unable to recover it. 00:36:16.908 [2024-11-18 20:37:28.813240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.908 [2024-11-18 20:37:28.813268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.908 qpair failed and we were unable to recover it. 00:36:16.908 [2024-11-18 20:37:28.813391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.908 [2024-11-18 20:37:28.813420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.908 qpair failed and we were unable to recover it. 
00:36:16.908 [2024-11-18 20:37:28.813666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.908 [2024-11-18 20:37:28.813734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.908 qpair failed and we were unable to recover it. 00:36:16.908 [2024-11-18 20:37:28.814023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.908 [2024-11-18 20:37:28.814090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.908 qpair failed and we were unable to recover it. 00:36:16.908 [2024-11-18 20:37:28.814337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.814405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 00:36:16.909 [2024-11-18 20:37:28.814625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.814710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 00:36:16.909 [2024-11-18 20:37:28.814999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.815067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 
00:36:16.909 [2024-11-18 20:37:28.815306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.815371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 00:36:16.909 [2024-11-18 20:37:28.815644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.815686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 00:36:16.909 [2024-11-18 20:37:28.815833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.815882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 00:36:16.909 [2024-11-18 20:37:28.816098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.816165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 00:36:16.909 [2024-11-18 20:37:28.816429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.816496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 
00:36:16.909 [2024-11-18 20:37:28.816751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.816819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 00:36:16.909 [2024-11-18 20:37:28.817071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.817137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 00:36:16.909 [2024-11-18 20:37:28.817392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.817458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 00:36:16.909 [2024-11-18 20:37:28.817708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.817778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 00:36:16.909 [2024-11-18 20:37:28.817971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.818036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 
00:36:16.909 [2024-11-18 20:37:28.818323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.818390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 00:36:16.909 [2024-11-18 20:37:28.818612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.818698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 00:36:16.909 [2024-11-18 20:37:28.818984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.819052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 00:36:16.909 [2024-11-18 20:37:28.819379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.819454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 00:36:16.909 [2024-11-18 20:37:28.819705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.819773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 
00:36:16.909 [2024-11-18 20:37:28.820010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.820077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 00:36:16.909 [2024-11-18 20:37:28.820371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.820438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 00:36:16.909 [2024-11-18 20:37:28.820713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.820742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 00:36:16.909 [2024-11-18 20:37:28.820833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.820861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 00:36:16.909 [2024-11-18 20:37:28.820982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.821012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 
00:36:16.909 [2024-11-18 20:37:28.821114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.821142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 00:36:16.909 [2024-11-18 20:37:28.821251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.821281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 00:36:16.909 [2024-11-18 20:37:28.821458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.821525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 00:36:16.909 [2024-11-18 20:37:28.821806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.821837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 00:36:16.909 [2024-11-18 20:37:28.821926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.821954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 
00:36:16.909 [2024-11-18 20:37:28.822066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.822096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 00:36:16.909 [2024-11-18 20:37:28.822247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.822313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 00:36:16.909 [2024-11-18 20:37:28.822559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.822588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 00:36:16.909 [2024-11-18 20:37:28.822740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.822770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 00:36:16.909 [2024-11-18 20:37:28.822869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.822897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 
00:36:16.909 [2024-11-18 20:37:28.822986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.823022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 00:36:16.909 [2024-11-18 20:37:28.823140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.909 [2024-11-18 20:37:28.823169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.909 qpair failed and we were unable to recover it. 00:36:16.909 [2024-11-18 20:37:28.823408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.910 [2024-11-18 20:37:28.823475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.910 qpair failed and we were unable to recover it. 00:36:16.910 [2024-11-18 20:37:28.823792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.910 [2024-11-18 20:37:28.823860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.910 qpair failed and we were unable to recover it. 00:36:16.910 [2024-11-18 20:37:28.824156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.910 [2024-11-18 20:37:28.824222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.910 qpair failed and we were unable to recover it. 
00:36:16.910 [2024-11-18 20:37:28.824498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.910 [2024-11-18 20:37:28.824565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.910 qpair failed and we were unable to recover it. 00:36:16.910 [2024-11-18 20:37:28.824826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.910 [2024-11-18 20:37:28.824896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.910 qpair failed and we were unable to recover it. 00:36:16.910 [2024-11-18 20:37:28.825115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.910 [2024-11-18 20:37:28.825183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.910 qpair failed and we were unable to recover it. 00:36:16.910 [2024-11-18 20:37:28.825477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.910 [2024-11-18 20:37:28.825544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.910 qpair failed and we were unable to recover it. 00:36:16.910 [2024-11-18 20:37:28.825827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.910 [2024-11-18 20:37:28.825898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.910 qpair failed and we were unable to recover it. 
00:36:16.910 [2024-11-18 20:37:28.826118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.910 [2024-11-18 20:37:28.826184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.910 qpair failed and we were unable to recover it. 00:36:16.910 [2024-11-18 20:37:28.826480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.910 [2024-11-18 20:37:28.826546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.910 qpair failed and we were unable to recover it. 00:36:16.910 [2024-11-18 20:37:28.826810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.910 [2024-11-18 20:37:28.826841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.910 qpair failed and we were unable to recover it. 00:36:16.910 [2024-11-18 20:37:28.826960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.910 [2024-11-18 20:37:28.826990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.910 qpair failed and we were unable to recover it. 00:36:16.910 [2024-11-18 20:37:28.827180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.910 [2024-11-18 20:37:28.827247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.910 qpair failed and we were unable to recover it. 
00:36:16.910 [2024-11-18 20:37:28.827598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.910 [2024-11-18 20:37:28.827687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.910 qpair failed and we were unable to recover it. 00:36:16.910 [2024-11-18 20:37:28.827945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.910 [2024-11-18 20:37:28.828013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.910 qpair failed and we were unable to recover it. 00:36:16.910 [2024-11-18 20:37:28.828285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.910 [2024-11-18 20:37:28.828315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.910 qpair failed and we were unable to recover it. 00:36:16.910 [2024-11-18 20:37:28.828462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.910 [2024-11-18 20:37:28.828492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.910 qpair failed and we were unable to recover it. 00:36:16.910 [2024-11-18 20:37:28.828666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.910 [2024-11-18 20:37:28.828734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.910 qpair failed and we were unable to recover it. 
00:36:16.910 [2024-11-18 20:37:28.829027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.910 [2024-11-18 20:37:28.829095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.910 qpair failed and we were unable to recover it. 00:36:16.910 [2024-11-18 20:37:28.829388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.910 [2024-11-18 20:37:28.829454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.910 qpair failed and we were unable to recover it. 00:36:16.910 [2024-11-18 20:37:28.829752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.910 [2024-11-18 20:37:28.829821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.910 qpair failed and we were unable to recover it. 00:36:16.910 [2024-11-18 20:37:28.830111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.910 [2024-11-18 20:37:28.830177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.910 qpair failed and we were unable to recover it. 00:36:16.910 [2024-11-18 20:37:28.830473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.910 [2024-11-18 20:37:28.830538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.910 qpair failed and we were unable to recover it. 
00:36:16.910 [2024-11-18 20:37:28.830853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.910 [2024-11-18 20:37:28.830920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.910 qpair failed and we were unable to recover it. 00:36:16.910 [2024-11-18 20:37:28.831111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.910 [2024-11-18 20:37:28.831177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.910 qpair failed and we were unable to recover it. 00:36:16.910 [2024-11-18 20:37:28.831470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.910 [2024-11-18 20:37:28.831500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.910 qpair failed and we were unable to recover it. 00:36:16.910 [2024-11-18 20:37:28.831648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.910 [2024-11-18 20:37:28.831678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.910 qpair failed and we were unable to recover it. 00:36:16.910 [2024-11-18 20:37:28.831931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.910 [2024-11-18 20:37:28.831998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.910 qpair failed and we were unable to recover it. 
00:36:16.910 [2024-11-18 20:37:28.832275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.910 [2024-11-18 20:37:28.832341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.910 qpair failed and we were unable to recover it. 00:36:16.911 [2024-11-18 20:37:28.832657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.911 [2024-11-18 20:37:28.832736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.911 qpair failed and we were unable to recover it. 00:36:16.911 [2024-11-18 20:37:28.832977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.911 [2024-11-18 20:37:28.833045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.911 qpair failed and we were unable to recover it. 00:36:16.911 [2024-11-18 20:37:28.833329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.911 [2024-11-18 20:37:28.833396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.911 qpair failed and we were unable to recover it. 00:36:16.911 [2024-11-18 20:37:28.833694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.911 [2024-11-18 20:37:28.833763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.911 qpair failed and we were unable to recover it. 
00:36:16.911 [2024-11-18 20:37:28.834009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.911 [2024-11-18 20:37:28.834076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.911 qpair failed and we were unable to recover it. 00:36:16.911 [2024-11-18 20:37:28.834367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.911 [2024-11-18 20:37:28.834434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.911 qpair failed and we were unable to recover it. 00:36:16.911 [2024-11-18 20:37:28.834722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.911 [2024-11-18 20:37:28.834753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.911 qpair failed and we were unable to recover it. 00:36:16.911 [2024-11-18 20:37:28.834900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.911 [2024-11-18 20:37:28.834930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.911 qpair failed and we were unable to recover it. 00:36:16.911 [2024-11-18 20:37:28.835080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.911 [2024-11-18 20:37:28.835110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:16.911 qpair failed and we were unable to recover it. 
00:36:17.192 [2024-11-18 20:37:28.849903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.192 [2024-11-18 20:37:28.850009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.192 qpair failed and we were unable to recover it.
00:36:17.193 [2024-11-18 20:37:28.867128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.193 [2024-11-18 20:37:28.867196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.193 qpair failed and we were unable to recover it. 00:36:17.193 [2024-11-18 20:37:28.867402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.193 [2024-11-18 20:37:28.867468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.193 qpair failed and we were unable to recover it. 00:36:17.193 [2024-11-18 20:37:28.867710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.867779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 00:36:17.194 [2024-11-18 20:37:28.868042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.868111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 00:36:17.194 [2024-11-18 20:37:28.868340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.868370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 
00:36:17.194 [2024-11-18 20:37:28.868487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.868515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 00:36:17.194 [2024-11-18 20:37:28.868742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.868811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 00:36:17.194 [2024-11-18 20:37:28.869113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.869180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 00:36:17.194 [2024-11-18 20:37:28.869481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.869547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 00:36:17.194 [2024-11-18 20:37:28.869792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.869860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 
00:36:17.194 [2024-11-18 20:37:28.870145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.870179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 00:36:17.194 [2024-11-18 20:37:28.870305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.870333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 00:36:17.194 [2024-11-18 20:37:28.870560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.870626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 00:36:17.194 [2024-11-18 20:37:28.870954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.870984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 00:36:17.194 [2024-11-18 20:37:28.871136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.871185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 
00:36:17.194 [2024-11-18 20:37:28.871447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.871514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 00:36:17.194 [2024-11-18 20:37:28.871805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.871874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 00:36:17.194 [2024-11-18 20:37:28.872077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.872145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 00:36:17.194 [2024-11-18 20:37:28.872394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.872461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 00:36:17.194 [2024-11-18 20:37:28.872715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.872783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 
00:36:17.194 [2024-11-18 20:37:28.873080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.873148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 00:36:17.194 [2024-11-18 20:37:28.873414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.873480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 00:36:17.194 [2024-11-18 20:37:28.873729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.873797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 00:36:17.194 [2024-11-18 20:37:28.874072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.874139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 00:36:17.194 [2024-11-18 20:37:28.874442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.874508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 
00:36:17.194 [2024-11-18 20:37:28.874809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.874878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 00:36:17.194 [2024-11-18 20:37:28.875090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.875158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 00:36:17.194 [2024-11-18 20:37:28.875347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.875415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 00:36:17.194 [2024-11-18 20:37:28.875709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.875778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 00:36:17.194 [2024-11-18 20:37:28.876076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.876144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 
00:36:17.194 [2024-11-18 20:37:28.876447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.876513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 00:36:17.194 [2024-11-18 20:37:28.876765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.876835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 00:36:17.194 [2024-11-18 20:37:28.877069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.877136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 00:36:17.194 [2024-11-18 20:37:28.877364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.877394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 00:36:17.194 [2024-11-18 20:37:28.877539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.877569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 
00:36:17.194 [2024-11-18 20:37:28.877853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.877922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 00:36:17.194 [2024-11-18 20:37:28.878220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.878287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 00:36:17.194 [2024-11-18 20:37:28.878544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.878613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 00:36:17.194 [2024-11-18 20:37:28.878860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.878930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 00:36:17.194 [2024-11-18 20:37:28.879176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.194 [2024-11-18 20:37:28.879244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.194 qpair failed and we were unable to recover it. 
00:36:17.195 [2024-11-18 20:37:28.879552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.879619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 00:36:17.195 [2024-11-18 20:37:28.879858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.879889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 00:36:17.195 [2024-11-18 20:37:28.879986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.880014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 00:36:17.195 [2024-11-18 20:37:28.880232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.880299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 00:36:17.195 [2024-11-18 20:37:28.880559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.880626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 
00:36:17.195 [2024-11-18 20:37:28.880860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.880930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 00:36:17.195 [2024-11-18 20:37:28.881223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.881291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 00:36:17.195 [2024-11-18 20:37:28.881551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.881618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 00:36:17.195 [2024-11-18 20:37:28.881890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.881957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 00:36:17.195 [2024-11-18 20:37:28.882192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.882259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 
00:36:17.195 [2024-11-18 20:37:28.882545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.882623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 00:36:17.195 [2024-11-18 20:37:28.882959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.883027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 00:36:17.195 [2024-11-18 20:37:28.883265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.883333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 00:36:17.195 [2024-11-18 20:37:28.883622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.883709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 00:36:17.195 [2024-11-18 20:37:28.884002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.884069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 
00:36:17.195 [2024-11-18 20:37:28.884362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.884429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 00:36:17.195 [2024-11-18 20:37:28.884723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.884792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 00:36:17.195 [2024-11-18 20:37:28.885099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.885166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 00:36:17.195 [2024-11-18 20:37:28.885416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.885484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 00:36:17.195 [2024-11-18 20:37:28.885740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.885810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 
00:36:17.195 [2024-11-18 20:37:28.886071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.886100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 00:36:17.195 [2024-11-18 20:37:28.886258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.886310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 00:36:17.195 [2024-11-18 20:37:28.886560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.886626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 00:36:17.195 [2024-11-18 20:37:28.886942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.886972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 00:36:17.195 [2024-11-18 20:37:28.887104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.887140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 
00:36:17.195 [2024-11-18 20:37:28.887258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.887286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 00:36:17.195 [2024-11-18 20:37:28.887486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.887556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 00:36:17.195 [2024-11-18 20:37:28.887876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.887945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 00:36:17.195 [2024-11-18 20:37:28.888195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.888264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 00:36:17.195 [2024-11-18 20:37:28.888488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.888555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 
00:36:17.195 [2024-11-18 20:37:28.888833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.888903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 00:36:17.195 [2024-11-18 20:37:28.889118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.889185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 00:36:17.195 [2024-11-18 20:37:28.889421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.889488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 00:36:17.195 [2024-11-18 20:37:28.889785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.889855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 00:36:17.195 [2024-11-18 20:37:28.890130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.890160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it. 
00:36:17.195 [2024-11-18 20:37:28.890272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.195 [2024-11-18 20:37:28.890300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.195 qpair failed and we were unable to recover it.
[The same three-line failure (posix_sock_create connect() error, errno = 111/ECONNREFUSED, followed by the nvme_tcp_qpair_connect_sock error for tqpair=0x7fe694000b90 at addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats verbatim with only the timestamps advancing, from 2024-11-18 20:37:28.890 through 20:37:28.923; the duplicate log entries are elided.]
00:36:17.199 [2024-11-18 20:37:28.923675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.923744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 00:36:17.199 [2024-11-18 20:37:28.923958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.924026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 00:36:17.199 [2024-11-18 20:37:28.924234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.924301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 00:36:17.199 [2024-11-18 20:37:28.924546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.924613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 00:36:17.199 [2024-11-18 20:37:28.924935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.925002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 
00:36:17.199 [2024-11-18 20:37:28.925290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.925357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 00:36:17.199 [2024-11-18 20:37:28.925603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.925690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 00:36:17.199 [2024-11-18 20:37:28.925965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.926032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 00:36:17.199 [2024-11-18 20:37:28.926281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.926349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 00:36:17.199 [2024-11-18 20:37:28.926615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.926703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 
00:36:17.199 [2024-11-18 20:37:28.926941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.927008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 00:36:17.199 [2024-11-18 20:37:28.927307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.927374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 00:36:17.199 [2024-11-18 20:37:28.927673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.927741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 00:36:17.199 [2024-11-18 20:37:28.927997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.928064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 00:36:17.199 [2024-11-18 20:37:28.928356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.928422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 
00:36:17.199 [2024-11-18 20:37:28.928725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.928793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 00:36:17.199 [2024-11-18 20:37:28.929010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.929077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 00:36:17.199 [2024-11-18 20:37:28.929319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.929387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 00:36:17.199 [2024-11-18 20:37:28.929678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.929746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 00:36:17.199 [2024-11-18 20:37:28.930049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.930116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 
00:36:17.199 [2024-11-18 20:37:28.930357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.930436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 00:36:17.199 [2024-11-18 20:37:28.930692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.930762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 00:36:17.199 [2024-11-18 20:37:28.930965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.931035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 00:36:17.199 [2024-11-18 20:37:28.931267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.931337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 00:36:17.199 [2024-11-18 20:37:28.931629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.931714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 
00:36:17.199 [2024-11-18 20:37:28.931931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.932000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 00:36:17.199 [2024-11-18 20:37:28.932264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.932331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 00:36:17.199 [2024-11-18 20:37:28.932629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.932677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 00:36:17.199 [2024-11-18 20:37:28.932835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.932865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 00:36:17.199 [2024-11-18 20:37:28.933139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.933206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 
00:36:17.199 [2024-11-18 20:37:28.933492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.199 [2024-11-18 20:37:28.933558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.199 qpair failed and we were unable to recover it. 00:36:17.199 [2024-11-18 20:37:28.933835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.933903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 00:36:17.200 [2024-11-18 20:37:28.934201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.934268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 00:36:17.200 [2024-11-18 20:37:28.934558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.934627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 00:36:17.200 [2024-11-18 20:37:28.934925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.934993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 
00:36:17.200 [2024-11-18 20:37:28.935262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.935292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 00:36:17.200 [2024-11-18 20:37:28.935414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.935442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 00:36:17.200 [2024-11-18 20:37:28.935658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.935730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 00:36:17.200 [2024-11-18 20:37:28.935994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.936063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 00:36:17.200 [2024-11-18 20:37:28.936357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.936423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 
00:36:17.200 [2024-11-18 20:37:28.936676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.936746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 00:36:17.200 [2024-11-18 20:37:28.937031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.937097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 00:36:17.200 [2024-11-18 20:37:28.937269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.937343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 00:36:17.200 [2024-11-18 20:37:28.937599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.937664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 00:36:17.200 [2024-11-18 20:37:28.937776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.937804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 
00:36:17.200 [2024-11-18 20:37:28.938006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.938036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 00:36:17.200 [2024-11-18 20:37:28.938186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.938216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 00:36:17.200 [2024-11-18 20:37:28.938491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.938560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 00:36:17.200 [2024-11-18 20:37:28.938862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.938930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 00:36:17.200 [2024-11-18 20:37:28.939220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.939288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 
00:36:17.200 [2024-11-18 20:37:28.939587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.939667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 00:36:17.200 [2024-11-18 20:37:28.939962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.940029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 00:36:17.200 [2024-11-18 20:37:28.940275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.940344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 00:36:17.200 [2024-11-18 20:37:28.940609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.940657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 00:36:17.200 [2024-11-18 20:37:28.940804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.940869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 
00:36:17.200 [2024-11-18 20:37:28.941172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.941238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 00:36:17.200 [2024-11-18 20:37:28.941542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.941609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 00:36:17.200 [2024-11-18 20:37:28.941823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.941891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 00:36:17.200 [2024-11-18 20:37:28.942151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.942218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 00:36:17.200 [2024-11-18 20:37:28.942506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.942573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 
00:36:17.200 [2024-11-18 20:37:28.942853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.942934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 00:36:17.200 [2024-11-18 20:37:28.943164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.943231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 00:36:17.200 [2024-11-18 20:37:28.943533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.943599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 00:36:17.200 [2024-11-18 20:37:28.943872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.200 [2024-11-18 20:37:28.943940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.200 qpair failed and we were unable to recover it. 00:36:17.201 [2024-11-18 20:37:28.944187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.201 [2024-11-18 20:37:28.944254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.201 qpair failed and we were unable to recover it. 
00:36:17.201 [2024-11-18 20:37:28.944497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.201 [2024-11-18 20:37:28.944563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.201 qpair failed and we were unable to recover it. 00:36:17.201 [2024-11-18 20:37:28.944827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.201 [2024-11-18 20:37:28.944897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.201 qpair failed and we were unable to recover it. 00:36:17.201 [2024-11-18 20:37:28.945205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.201 [2024-11-18 20:37:28.945272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.201 qpair failed and we were unable to recover it. 00:36:17.201 [2024-11-18 20:37:28.945505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.201 [2024-11-18 20:37:28.945535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.201 qpair failed and we were unable to recover it. 00:36:17.201 [2024-11-18 20:37:28.945663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.201 [2024-11-18 20:37:28.945692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.201 qpair failed and we were unable to recover it. 
00:36:17.201 [2024-11-18 20:37:28.945912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.201 [2024-11-18 20:37:28.945942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.201 qpair failed and we were unable to recover it. 00:36:17.201 [2024-11-18 20:37:28.946088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.201 [2024-11-18 20:37:28.946117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.201 qpair failed and we were unable to recover it. 00:36:17.201 [2024-11-18 20:37:28.946328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.201 [2024-11-18 20:37:28.946394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.201 qpair failed and we were unable to recover it. 00:36:17.201 [2024-11-18 20:37:28.946629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.201 [2024-11-18 20:37:28.946667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.201 qpair failed and we were unable to recover it. 00:36:17.201 [2024-11-18 20:37:28.946798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.201 [2024-11-18 20:37:28.946826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.201 qpair failed and we were unable to recover it. 
00:36:17.201 [2024-11-18 20:37:28.947063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.201 [2024-11-18 20:37:28.947130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.201 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 (ECONNREFUSED) / nvme_tcp_qpair_connect_sock error pair repeats continuously against 10.0.0.2:4420, first from 20:37:28.947 through 20:37:28.971 for tqpair=0x7fe694000b90, then from 20:37:28.971 through 20:37:28.973 for tqpair=0x7fe6a0000b90, each attempt ending in "qpair failed and we were unable to recover it." ...]
00:36:17.204 [2024-11-18 20:37:28.973837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.204 [2024-11-18 20:37:28.973868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.204 qpair failed and we were unable to recover it.
00:36:17.204 [2024-11-18 20:37:28.974133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.204 [2024-11-18 20:37:28.974162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.204 qpair failed and we were unable to recover it. 00:36:17.204 [2024-11-18 20:37:28.974312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.204 [2024-11-18 20:37:28.974340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.204 qpair failed and we were unable to recover it. 00:36:17.204 [2024-11-18 20:37:28.974458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.204 [2024-11-18 20:37:28.974489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.204 qpair failed and we were unable to recover it. 00:36:17.204 [2024-11-18 20:37:28.974627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.204 [2024-11-18 20:37:28.974665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.204 qpair failed and we were unable to recover it. 00:36:17.204 [2024-11-18 20:37:28.974845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.204 [2024-11-18 20:37:28.974921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.204 qpair failed and we were unable to recover it. 
00:36:17.204 [2024-11-18 20:37:28.975105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.204 [2024-11-18 20:37:28.975161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.204 qpair failed and we were unable to recover it. 00:36:17.204 [2024-11-18 20:37:28.975316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.204 [2024-11-18 20:37:28.975385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.204 qpair failed and we were unable to recover it. 00:36:17.204 [2024-11-18 20:37:28.975518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.204 [2024-11-18 20:37:28.975550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.204 qpair failed and we were unable to recover it. 00:36:17.204 [2024-11-18 20:37:28.975779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.204 [2024-11-18 20:37:28.975811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.204 qpair failed and we were unable to recover it. 00:36:17.204 [2024-11-18 20:37:28.975939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.204 [2024-11-18 20:37:28.975970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.204 qpair failed and we were unable to recover it. 
00:36:17.204 [2024-11-18 20:37:28.976132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.204 [2024-11-18 20:37:28.976164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.204 qpair failed and we were unable to recover it. 00:36:17.204 [2024-11-18 20:37:28.976325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.204 [2024-11-18 20:37:28.976357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.204 qpair failed and we were unable to recover it. 00:36:17.204 [2024-11-18 20:37:28.976514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.204 [2024-11-18 20:37:28.976545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.204 qpair failed and we were unable to recover it. 00:36:17.204 [2024-11-18 20:37:28.976648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.204 [2024-11-18 20:37:28.976692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.204 qpair failed and we were unable to recover it. 00:36:17.204 [2024-11-18 20:37:28.976789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.204 [2024-11-18 20:37:28.976820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.204 qpair failed and we were unable to recover it. 
00:36:17.204 [2024-11-18 20:37:28.977071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.204 [2024-11-18 20:37:28.977127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.204 qpair failed and we were unable to recover it. 00:36:17.204 [2024-11-18 20:37:28.977288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.204 [2024-11-18 20:37:28.977343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.204 qpair failed and we were unable to recover it. 00:36:17.204 [2024-11-18 20:37:28.977473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.204 [2024-11-18 20:37:28.977509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.204 qpair failed and we were unable to recover it. 00:36:17.204 [2024-11-18 20:37:28.977613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.204 [2024-11-18 20:37:28.977674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.204 qpair failed and we were unable to recover it. 00:36:17.204 [2024-11-18 20:37:28.977874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.204 [2024-11-18 20:37:28.977939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.204 qpair failed and we were unable to recover it. 
00:36:17.204 [2024-11-18 20:37:28.978101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.204 [2024-11-18 20:37:28.978157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.204 qpair failed and we were unable to recover it. 00:36:17.204 [2024-11-18 20:37:28.978331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.204 [2024-11-18 20:37:28.978397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.204 qpair failed and we were unable to recover it. 00:36:17.204 [2024-11-18 20:37:28.978498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.204 [2024-11-18 20:37:28.978529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.204 qpair failed and we were unable to recover it. 00:36:17.204 [2024-11-18 20:37:28.978692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.204 [2024-11-18 20:37:28.978763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.204 qpair failed and we were unable to recover it. 00:36:17.204 [2024-11-18 20:37:28.978937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.204 [2024-11-18 20:37:28.979007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 
00:36:17.205 [2024-11-18 20:37:28.979114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.979145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 00:36:17.205 [2024-11-18 20:37:28.979296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.979328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 00:36:17.205 [2024-11-18 20:37:28.979455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.979486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 00:36:17.205 [2024-11-18 20:37:28.979650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.979683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 00:36:17.205 [2024-11-18 20:37:28.979858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.979925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 
00:36:17.205 [2024-11-18 20:37:28.980053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.980083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 00:36:17.205 [2024-11-18 20:37:28.980182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.980213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 00:36:17.205 [2024-11-18 20:37:28.980334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.980366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 00:36:17.205 [2024-11-18 20:37:28.980498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.980529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 00:36:17.205 [2024-11-18 20:37:28.980653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.980683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 
00:36:17.205 [2024-11-18 20:37:28.980824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.980870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 00:36:17.205 [2024-11-18 20:37:28.981011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.981044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 00:36:17.205 [2024-11-18 20:37:28.981171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.981200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 00:36:17.205 [2024-11-18 20:37:28.981309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.981338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 00:36:17.205 [2024-11-18 20:37:28.981438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.981466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 
00:36:17.205 [2024-11-18 20:37:28.981598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.981629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 00:36:17.205 [2024-11-18 20:37:28.981782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.981815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 00:36:17.205 [2024-11-18 20:37:28.981988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.982047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 00:36:17.205 [2024-11-18 20:37:28.982244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.982302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 00:36:17.205 [2024-11-18 20:37:28.982429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.982459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 
00:36:17.205 [2024-11-18 20:37:28.982578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.982609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 00:36:17.205 [2024-11-18 20:37:28.982748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.982780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 00:36:17.205 [2024-11-18 20:37:28.982917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.982949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 00:36:17.205 [2024-11-18 20:37:28.983113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.983145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 00:36:17.205 [2024-11-18 20:37:28.983274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.983304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 
00:36:17.205 [2024-11-18 20:37:28.983508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.983540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 00:36:17.205 [2024-11-18 20:37:28.983666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.983704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 00:36:17.205 [2024-11-18 20:37:28.983829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.983858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 00:36:17.205 [2024-11-18 20:37:28.984047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.984113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 00:36:17.205 [2024-11-18 20:37:28.984408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.984473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 
00:36:17.205 [2024-11-18 20:37:28.984676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.984720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 00:36:17.205 [2024-11-18 20:37:28.984821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.984852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 00:36:17.205 [2024-11-18 20:37:28.984977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.205 [2024-11-18 20:37:28.985031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.205 qpair failed and we were unable to recover it. 00:36:17.206 [2024-11-18 20:37:28.985241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.206 [2024-11-18 20:37:28.985305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.206 qpair failed and we were unable to recover it. 00:36:17.206 [2024-11-18 20:37:28.985504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.206 [2024-11-18 20:37:28.985535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.206 qpair failed and we were unable to recover it. 
00:36:17.206 [2024-11-18 20:37:28.985649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.206 [2024-11-18 20:37:28.985678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.206 qpair failed and we were unable to recover it. 00:36:17.206 [2024-11-18 20:37:28.985785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.206 [2024-11-18 20:37:28.985815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.206 qpair failed and we were unable to recover it. 00:36:17.206 [2024-11-18 20:37:28.985945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.206 [2024-11-18 20:37:28.986008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.206 qpair failed and we were unable to recover it. 00:36:17.206 [2024-11-18 20:37:28.986211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.206 [2024-11-18 20:37:28.986279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.206 qpair failed and we were unable to recover it. 00:36:17.206 [2024-11-18 20:37:28.986573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.206 [2024-11-18 20:37:28.986672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.206 qpair failed and we were unable to recover it. 
00:36:17.206 [2024-11-18 20:37:28.986801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.206 [2024-11-18 20:37:28.986831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.206 qpair failed and we were unable to recover it. 00:36:17.206 [2024-11-18 20:37:28.986968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.206 [2024-11-18 20:37:28.987055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.206 qpair failed and we were unable to recover it. 00:36:17.206 [2024-11-18 20:37:28.987343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.206 [2024-11-18 20:37:28.987409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.206 qpair failed and we were unable to recover it. 00:36:17.206 [2024-11-18 20:37:28.987695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.206 [2024-11-18 20:37:28.987727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.206 qpair failed and we were unable to recover it. 00:36:17.206 [2024-11-18 20:37:28.987828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.206 [2024-11-18 20:37:28.987857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.206 qpair failed and we were unable to recover it. 
00:36:17.206 [2024-11-18 20:37:28.988086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.206 [2024-11-18 20:37:28.988148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.206 qpair failed and we were unable to recover it. 00:36:17.206 [2024-11-18 20:37:28.988474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.206 [2024-11-18 20:37:28.988539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.206 qpair failed and we were unable to recover it. 00:36:17.206 [2024-11-18 20:37:28.988785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.206 [2024-11-18 20:37:28.988816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.206 qpair failed and we were unable to recover it. 00:36:17.206 [2024-11-18 20:37:28.989000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.206 [2024-11-18 20:37:28.989066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.206 qpair failed and we were unable to recover it. 00:36:17.206 [2024-11-18 20:37:28.989288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.206 [2024-11-18 20:37:28.989353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.206 qpair failed and we were unable to recover it. 
00:36:17.206 [2024-11-18 20:37:28.989555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.206 [2024-11-18 20:37:28.989620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.206 qpair failed and we were unable to recover it.
00:36:17.206 [2024-11-18 20:37:28.989809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.206 [2024-11-18 20:37:28.989840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.206 qpair failed and we were unable to recover it.
00:36:17.206 [2024-11-18 20:37:28.989945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.206 [2024-11-18 20:37:28.989985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.206 qpair failed and we were unable to recover it.
00:36:17.206 [2024-11-18 20:37:28.990164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.206 [2024-11-18 20:37:28.990227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.206 qpair failed and we were unable to recover it.
00:36:17.206 [2024-11-18 20:37:28.990529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.206 [2024-11-18 20:37:28.990594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.206 qpair failed and we were unable to recover it.
00:36:17.206 [2024-11-18 20:37:28.990822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.206 [2024-11-18 20:37:28.990853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.206 qpair failed and we were unable to recover it.
00:36:17.206 [2024-11-18 20:37:28.991018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.206 [2024-11-18 20:37:28.991085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.206 qpair failed and we were unable to recover it.
00:36:17.206 [2024-11-18 20:37:28.991328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.206 [2024-11-18 20:37:28.991393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.206 qpair failed and we were unable to recover it.
00:36:17.206 [2024-11-18 20:37:28.991584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.206 [2024-11-18 20:37:28.991615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.206 qpair failed and we were unable to recover it.
00:36:17.206 [2024-11-18 20:37:28.991799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.206 [2024-11-18 20:37:28.991830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.206 qpair failed and we were unable to recover it.
00:36:17.206 [2024-11-18 20:37:28.992064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.206 [2024-11-18 20:37:28.992129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.206 qpair failed and we were unable to recover it.
00:36:17.206 [2024-11-18 20:37:28.992364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.206 [2024-11-18 20:37:28.992429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.206 qpair failed and we were unable to recover it.
00:36:17.206 [2024-11-18 20:37:28.992691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.206 [2024-11-18 20:37:28.992722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.206 qpair failed and we were unable to recover it.
00:36:17.206 [2024-11-18 20:37:28.992855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.206 [2024-11-18 20:37:28.992886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.206 qpair failed and we were unable to recover it.
00:36:17.206 [2024-11-18 20:37:28.993068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.206 [2024-11-18 20:37:28.993099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.206 qpair failed and we were unable to recover it.
00:36:17.206 [2024-11-18 20:37:28.993252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.206 [2024-11-18 20:37:28.993315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.206 qpair failed and we were unable to recover it.
00:36:17.206 [2024-11-18 20:37:28.993616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.206 [2024-11-18 20:37:28.993707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.206 qpair failed and we were unable to recover it.
00:36:17.206 [2024-11-18 20:37:28.993832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.206 [2024-11-18 20:37:28.993861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.206 qpair failed and we were unable to recover it.
00:36:17.206 [2024-11-18 20:37:28.994050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.206 [2024-11-18 20:37:28.994113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.206 qpair failed and we were unable to recover it.
00:36:17.206 [2024-11-18 20:37:28.994357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.206 [2024-11-18 20:37:28.994422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.206 qpair failed and we were unable to recover it.
00:36:17.206 [2024-11-18 20:37:28.994599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.206 [2024-11-18 20:37:28.994630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:28.994776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:28.994806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:28.994907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:28.994995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:28.995295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:28.995360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:28.995663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:28.995725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:28.995879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:28.995943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:28.996162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:28.996192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:28.996371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:28.996436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:28.996698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:28.996730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:28.996859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:28.996889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:28.997088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:28.997152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:28.997398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:28.997465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:28.997632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:28.997669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:28.997804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:28.997833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:28.997966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:28.997998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:28.998235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:28.998300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:28.998614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:28.998713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:28.998839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:28.998870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:28.999001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:28.999031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:28.999203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:28.999267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:28.999549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:28.999615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:28.999839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:28.999886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:29.000035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:29.000098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:29.000326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:29.000383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:29.000509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:29.000540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:29.000662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:29.000704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:29.000832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:29.000864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:29.000998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:29.001030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:29.001153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:29.001183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:29.001316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:29.001349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:29.001510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:29.001543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:29.001662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:29.001696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:29.001854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:29.001884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:29.002024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:29.002052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:29.002181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:29.002213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:29.002338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:29.002366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:29.002468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:29.002500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:29.002661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.207 [2024-11-18 20:37:29.002696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.207 qpair failed and we were unable to recover it.
00:36:17.207 [2024-11-18 20:37:29.002834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.002890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.003077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.003126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.003229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.003260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.003384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.003415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.003541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.003578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.003689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.003720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.003880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.003921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.004048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.004079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.004205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.004237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.004396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.004429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.004556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.004590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.004726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.004758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.004888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.004919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.005118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.005188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.005484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.005550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.005754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.005827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.006148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.006215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.006489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.006555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.006773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.006805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.007008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.007074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.007255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.007315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.007485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.007547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.007714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.007746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.007953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.008011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.008253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.008306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.008437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.008469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.008620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.008663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.008817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.008879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.009088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.009120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.009342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.009391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.009521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.009552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.009722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.009779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.010018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.010069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.010306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.010361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.010470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.010500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.010627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.010667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.010850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.010912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.011063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.208 [2024-11-18 20:37:29.011141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.208 qpair failed and we were unable to recover it.
00:36:17.208 [2024-11-18 20:37:29.011305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.209 [2024-11-18 20:37:29.011339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.209 qpair failed and we were unable to recover it.
00:36:17.209 [2024-11-18 20:37:29.011477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.209 [2024-11-18 20:37:29.011507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.209 qpair failed and we were unable to recover it.
00:36:17.209 [2024-11-18 20:37:29.011645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.209 [2024-11-18 20:37:29.011678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.209 qpair failed and we were unable to recover it.
00:36:17.209 [2024-11-18 20:37:29.011794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.209 [2024-11-18 20:37:29.011826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.209 qpair failed and we were unable to recover it.
00:36:17.209 [2024-11-18 20:37:29.011952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.209 [2024-11-18 20:37:29.011984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.209 qpair failed and we were unable to recover it.
00:36:17.209 [2024-11-18 20:37:29.012104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.209 [2024-11-18 20:37:29.012134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.209 qpair failed and we were unable to recover it.
00:36:17.209 [2024-11-18 20:37:29.012257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.209 [2024-11-18 20:37:29.012295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.209 qpair failed and we were unable to recover it.
00:36:17.209 [2024-11-18 20:37:29.012458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.209 [2024-11-18 20:37:29.012490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.209 qpair failed and we were unable to recover it.
00:36:17.209 [2024-11-18 20:37:29.012620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.209 [2024-11-18 20:37:29.012656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.209 qpair failed and we were unable to recover it.
00:36:17.209 [2024-11-18 20:37:29.012771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.209 [2024-11-18 20:37:29.012804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.209 qpair failed and we were unable to recover it.
00:36:17.209 [2024-11-18 20:37:29.012935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.209 [2024-11-18 20:37:29.012967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.209 qpair failed and we were unable to recover it.
00:36:17.209 [2024-11-18 20:37:29.013121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.209 [2024-11-18 20:37:29.013153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.209 qpair failed and we were unable to recover it.
00:36:17.209 [2024-11-18 20:37:29.013263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.209 [2024-11-18 20:37:29.013294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.209 qpair failed and we were unable to recover it.
00:36:17.209 [2024-11-18 20:37:29.013454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.209 [2024-11-18 20:37:29.013486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.209 qpair failed and we were unable to recover it.
00:36:17.209 [2024-11-18 20:37:29.013574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.209 [2024-11-18 20:37:29.013604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.209 qpair failed and we were unable to recover it.
00:36:17.209 [2024-11-18 20:37:29.013786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.209 [2024-11-18 20:37:29.013818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.209 qpair failed and we were unable to recover it.
00:36:17.209 [2024-11-18 20:37:29.013944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.209 [2024-11-18 20:37:29.013977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.209 qpair failed and we were unable to recover it.
00:36:17.209 [2024-11-18 20:37:29.014101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.209 [2024-11-18 20:37:29.014133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.209 qpair failed and we were unable to recover it.
00:36:17.209 [2024-11-18 20:37:29.014263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.209 [2024-11-18 20:37:29.014296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.209 qpair failed and we were unable to recover it.
00:36:17.209 [2024-11-18 20:37:29.014454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.209 [2024-11-18 20:37:29.014486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.209 qpair failed and we were unable to recover it. 00:36:17.209 [2024-11-18 20:37:29.014647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.209 [2024-11-18 20:37:29.014743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.209 qpair failed and we were unable to recover it. 00:36:17.209 [2024-11-18 20:37:29.015021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.209 [2024-11-18 20:37:29.015092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.209 qpair failed and we were unable to recover it. 00:36:17.209 [2024-11-18 20:37:29.015394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.209 [2024-11-18 20:37:29.015461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.209 qpair failed and we were unable to recover it. 00:36:17.209 [2024-11-18 20:37:29.015681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.209 [2024-11-18 20:37:29.015713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.209 qpair failed and we were unable to recover it. 
00:36:17.209 [2024-11-18 20:37:29.015847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.209 [2024-11-18 20:37:29.015879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.209 qpair failed and we were unable to recover it. 00:36:17.209 [2024-11-18 20:37:29.016135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.209 [2024-11-18 20:37:29.016201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.209 qpair failed and we were unable to recover it. 00:36:17.209 [2024-11-18 20:37:29.016464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.209 [2024-11-18 20:37:29.016531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.209 qpair failed and we were unable to recover it. 00:36:17.209 [2024-11-18 20:37:29.016774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.209 [2024-11-18 20:37:29.016806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.209 qpair failed and we were unable to recover it. 00:36:17.209 [2024-11-18 20:37:29.016990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.209 [2024-11-18 20:37:29.017056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.209 qpair failed and we were unable to recover it. 
00:36:17.209 [2024-11-18 20:37:29.017345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.209 [2024-11-18 20:37:29.017411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.209 qpair failed and we were unable to recover it. 00:36:17.209 [2024-11-18 20:37:29.017694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.209 [2024-11-18 20:37:29.017725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.209 qpair failed and we were unable to recover it. 00:36:17.209 [2024-11-18 20:37:29.017852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.209 [2024-11-18 20:37:29.017882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.209 qpair failed and we were unable to recover it. 00:36:17.209 [2024-11-18 20:37:29.018015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.209 [2024-11-18 20:37:29.018046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.209 qpair failed and we were unable to recover it. 00:36:17.209 [2024-11-18 20:37:29.018248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.209 [2024-11-18 20:37:29.018315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.209 qpair failed and we were unable to recover it. 
00:36:17.209 [2024-11-18 20:37:29.018561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.209 [2024-11-18 20:37:29.018629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.209 qpair failed and we were unable to recover it. 00:36:17.209 [2024-11-18 20:37:29.018789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.209 [2024-11-18 20:37:29.018818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.209 qpair failed and we were unable to recover it. 00:36:17.209 [2024-11-18 20:37:29.018948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.209 [2024-11-18 20:37:29.018978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.209 qpair failed and we were unable to recover it. 00:36:17.209 [2024-11-18 20:37:29.019222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.209 [2024-11-18 20:37:29.019277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.209 qpair failed and we were unable to recover it. 00:36:17.210 [2024-11-18 20:37:29.019504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.019567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 
00:36:17.210 [2024-11-18 20:37:29.019811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.019841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 00:36:17.210 [2024-11-18 20:37:29.019992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.020056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 00:36:17.210 [2024-11-18 20:37:29.020319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.020350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 00:36:17.210 [2024-11-18 20:37:29.020601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.020702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 00:36:17.210 [2024-11-18 20:37:29.020830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.020860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 
00:36:17.210 [2024-11-18 20:37:29.020980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.021010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 00:36:17.210 [2024-11-18 20:37:29.021116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.021195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 00:36:17.210 [2024-11-18 20:37:29.021483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.021560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 00:36:17.210 [2024-11-18 20:37:29.021814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.021845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 00:36:17.210 [2024-11-18 20:37:29.022038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.022069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 
00:36:17.210 [2024-11-18 20:37:29.022366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.022431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 00:36:17.210 [2024-11-18 20:37:29.022651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.022691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 00:36:17.210 [2024-11-18 20:37:29.022793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.022825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 00:36:17.210 [2024-11-18 20:37:29.023006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.023070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 00:36:17.210 [2024-11-18 20:37:29.023358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.023425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 
00:36:17.210 [2024-11-18 20:37:29.023686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.023718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 00:36:17.210 [2024-11-18 20:37:29.023876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.023916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 00:36:17.210 [2024-11-18 20:37:29.024161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.024228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 00:36:17.210 [2024-11-18 20:37:29.024474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.024542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 00:36:17.210 [2024-11-18 20:37:29.024771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.024803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 
00:36:17.210 [2024-11-18 20:37:29.024908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.024940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 00:36:17.210 [2024-11-18 20:37:29.025039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.025092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 00:36:17.210 [2024-11-18 20:37:29.025328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.025394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 00:36:17.210 [2024-11-18 20:37:29.025575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.025605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 00:36:17.210 [2024-11-18 20:37:29.025716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.025747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 
00:36:17.210 [2024-11-18 20:37:29.025885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.025926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 00:36:17.210 [2024-11-18 20:37:29.026127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.026192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 00:36:17.210 [2024-11-18 20:37:29.026445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.026512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 00:36:17.210 [2024-11-18 20:37:29.026744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.026777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 00:36:17.210 [2024-11-18 20:37:29.026878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.026907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 
00:36:17.210 [2024-11-18 20:37:29.027033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.027065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 00:36:17.210 [2024-11-18 20:37:29.027322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.210 [2024-11-18 20:37:29.027389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.210 qpair failed and we were unable to recover it. 00:36:17.210 [2024-11-18 20:37:29.027625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.027663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 00:36:17.211 [2024-11-18 20:37:29.027759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.027789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 00:36:17.211 [2024-11-18 20:37:29.027896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.027925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 
00:36:17.211 [2024-11-18 20:37:29.028122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.028188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 00:36:17.211 [2024-11-18 20:37:29.028474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.028539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 00:36:17.211 [2024-11-18 20:37:29.028772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.028803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 00:36:17.211 [2024-11-18 20:37:29.028930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.028960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 00:36:17.211 [2024-11-18 20:37:29.029113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.029191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 
00:36:17.211 [2024-11-18 20:37:29.029429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.029461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 00:36:17.211 [2024-11-18 20:37:29.029631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.029666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 00:36:17.211 [2024-11-18 20:37:29.029821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.029852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 00:36:17.211 [2024-11-18 20:37:29.030088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.030154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 00:36:17.211 [2024-11-18 20:37:29.030418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.030450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 
00:36:17.211 [2024-11-18 20:37:29.030720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.030752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 00:36:17.211 [2024-11-18 20:37:29.030881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.030910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 00:36:17.211 [2024-11-18 20:37:29.031204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.031280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 00:36:17.211 [2024-11-18 20:37:29.031568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.031634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 00:36:17.211 [2024-11-18 20:37:29.031832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.031865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 
00:36:17.211 [2024-11-18 20:37:29.031993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.032024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 00:36:17.211 [2024-11-18 20:37:29.032126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.032198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 00:36:17.211 [2024-11-18 20:37:29.032462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.032528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 00:36:17.211 [2024-11-18 20:37:29.032752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.032801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 00:36:17.211 [2024-11-18 20:37:29.032940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.032974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 
00:36:17.211 [2024-11-18 20:37:29.033175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.033249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 00:36:17.211 [2024-11-18 20:37:29.033477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.033510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 00:36:17.211 [2024-11-18 20:37:29.033650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.033681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 00:36:17.211 [2024-11-18 20:37:29.033807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.033838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 00:36:17.211 [2024-11-18 20:37:29.033926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.033956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 
00:36:17.211 [2024-11-18 20:37:29.034076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.034106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 00:36:17.211 [2024-11-18 20:37:29.034204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.034234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 00:36:17.211 [2024-11-18 20:37:29.034387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.034419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 00:36:17.211 [2024-11-18 20:37:29.034549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.034581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 00:36:17.211 [2024-11-18 20:37:29.034711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.211 [2024-11-18 20:37:29.034742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.211 qpair failed and we were unable to recover it. 
00:36:17.211 [2024-11-18 20:37:29.034841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.211 [2024-11-18 20:37:29.034871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.211 qpair failed and we were unable to recover it.
00:36:17.211 [2024-11-18 20:37:29.035001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.211 [2024-11-18 20:37:29.035034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.211 qpair failed and we were unable to recover it.
00:36:17.211 [2024-11-18 20:37:29.035194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.211 [2024-11-18 20:37:29.035227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.211 qpair failed and we were unable to recover it.
00:36:17.211 [2024-11-18 20:37:29.035388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.211 [2024-11-18 20:37:29.035421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.211 qpair failed and we were unable to recover it.
00:36:17.211 [2024-11-18 20:37:29.035519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.211 [2024-11-18 20:37:29.035547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.211 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.035650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.035714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.035966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.036032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.036323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.036389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.036627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.036702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.036884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.036949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.037118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.037175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.037362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.037424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.037581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.037613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.037849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.037907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.038092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.038153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.038311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.038343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.038443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.038473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.038596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.038628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.038817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.038874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.039050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.039113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.039288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.039346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.039475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.039505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.039633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.039685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.039921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.039975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.040212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.040265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.040368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.040398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.040556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.040588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.040688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.040720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.040952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.041022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.041220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.041273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.041406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.041438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.041599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.041631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.041871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.041925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.042121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.042181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.042301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.042355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.042468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.042501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.042610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.042663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.042891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.042944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.043130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.043184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.043340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.043371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.043479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.043510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.043646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.043679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.212 qpair failed and we were unable to recover it.
00:36:17.212 [2024-11-18 20:37:29.043857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.212 [2024-11-18 20:37:29.043922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.044155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.044206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.044327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.044358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.044492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.044524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.044653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.044685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.044814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.044847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.044999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.045031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.045263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.045319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.045477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.045509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.045649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.045682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.045872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.045928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.046124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.046179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.046300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.046331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.046457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.046488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.046610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.046649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.046827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.046882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.046978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.047010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.047224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.047291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.047418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.047450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.047577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.047608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.047751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.047788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.047950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.047981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.048077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.048109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.048231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.048263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.048435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.048480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.048649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.048718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.048968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.049035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.049288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.049353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.049594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.049693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.049826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.049857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.050045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.050099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.050282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.050337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.050430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.050461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.050587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.050618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.050769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.050838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.051016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.051083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.051316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.051369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.051495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.213 [2024-11-18 20:37:29.051527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.213 qpair failed and we were unable to recover it.
00:36:17.213 [2024-11-18 20:37:29.051692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.214 [2024-11-18 20:37:29.051725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.214 qpair failed and we were unable to recover it.
00:36:17.214 [2024-11-18 20:37:29.051860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.214 [2024-11-18 20:37:29.051900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.214 qpair failed and we were unable to recover it.
00:36:17.214 [2024-11-18 20:37:29.052034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.214 [2024-11-18 20:37:29.052067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.214 qpair failed and we were unable to recover it.
00:36:17.214 [2024-11-18 20:37:29.052226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.214 [2024-11-18 20:37:29.052257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.214 qpair failed and we were unable to recover it.
00:36:17.214 [2024-11-18 20:37:29.052388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.214 [2024-11-18 20:37:29.052420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.214 qpair failed and we were unable to recover it.
00:36:17.214 [2024-11-18 20:37:29.052549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.214 [2024-11-18 20:37:29.052582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.214 qpair failed and we were unable to recover it.
00:36:17.214 [2024-11-18 20:37:29.052727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.214 [2024-11-18 20:37:29.052760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.214 qpair failed and we were unable to recover it.
00:36:17.214 [2024-11-18 20:37:29.052860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.214 [2024-11-18 20:37:29.052900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.214 qpair failed and we were unable to recover it.
00:36:17.214 [2024-11-18 20:37:29.053037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.214 [2024-11-18 20:37:29.053069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.214 qpair failed and we were unable to recover it.
00:36:17.214 [2024-11-18 20:37:29.053197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.214 [2024-11-18 20:37:29.053229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.214 qpair failed and we were unable to recover it.
00:36:17.214 [2024-11-18 20:37:29.053382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.214 [2024-11-18 20:37:29.053414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.214 qpair failed and we were unable to recover it.
00:36:17.214 [2024-11-18 20:37:29.053547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.214 [2024-11-18 20:37:29.053580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.214 qpair failed and we were unable to recover it.
00:36:17.214 [2024-11-18 20:37:29.053713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.214 [2024-11-18 20:37:29.053745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.214 qpair failed and we were unable to recover it.
00:36:17.214 [2024-11-18 20:37:29.053871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.214 [2024-11-18 20:37:29.053905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.214 qpair failed and we were unable to recover it.
00:36:17.214 [2024-11-18 20:37:29.054035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.214 [2024-11-18 20:37:29.054065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.214 qpair failed and we were unable to recover it.
00:36:17.214 [2024-11-18 20:37:29.054185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.214 [2024-11-18 20:37:29.054215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.214 qpair failed and we were unable to recover it. 00:36:17.214 [2024-11-18 20:37:29.054370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.214 [2024-11-18 20:37:29.054401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.214 qpair failed and we were unable to recover it. 00:36:17.214 [2024-11-18 20:37:29.054563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.214 [2024-11-18 20:37:29.054594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.214 qpair failed and we were unable to recover it. 00:36:17.214 [2024-11-18 20:37:29.054732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.214 [2024-11-18 20:37:29.054763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.214 qpair failed and we were unable to recover it. 00:36:17.214 [2024-11-18 20:37:29.054918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.214 [2024-11-18 20:37:29.054982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.214 qpair failed and we were unable to recover it. 
00:36:17.214 [2024-11-18 20:37:29.055266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.214 [2024-11-18 20:37:29.055331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.214 qpair failed and we were unable to recover it. 00:36:17.214 [2024-11-18 20:37:29.055619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.214 [2024-11-18 20:37:29.055698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.214 qpair failed and we were unable to recover it. 00:36:17.214 [2024-11-18 20:37:29.055894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.214 [2024-11-18 20:37:29.055972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.214 qpair failed and we were unable to recover it. 00:36:17.214 [2024-11-18 20:37:29.056189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.214 [2024-11-18 20:37:29.056256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.214 qpair failed and we were unable to recover it. 00:36:17.214 [2024-11-18 20:37:29.056440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.214 [2024-11-18 20:37:29.056509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.214 qpair failed and we were unable to recover it. 
00:36:17.214 [2024-11-18 20:37:29.056695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.214 [2024-11-18 20:37:29.056726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.214 qpair failed and we were unable to recover it. 00:36:17.214 [2024-11-18 20:37:29.056849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.214 [2024-11-18 20:37:29.056879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.214 qpair failed and we were unable to recover it. 00:36:17.214 [2024-11-18 20:37:29.057042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.214 [2024-11-18 20:37:29.057075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.214 qpair failed and we were unable to recover it. 00:36:17.214 [2024-11-18 20:37:29.057242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.214 [2024-11-18 20:37:29.057277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.214 qpair failed and we were unable to recover it. 00:36:17.214 [2024-11-18 20:37:29.057377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.214 [2024-11-18 20:37:29.057410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.214 qpair failed and we were unable to recover it. 
00:36:17.214 [2024-11-18 20:37:29.057499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.214 [2024-11-18 20:37:29.057532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.214 qpair failed and we were unable to recover it. 00:36:17.214 [2024-11-18 20:37:29.057668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.214 [2024-11-18 20:37:29.057716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.214 qpair failed and we were unable to recover it. 00:36:17.214 [2024-11-18 20:37:29.057841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.214 [2024-11-18 20:37:29.057871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.214 qpair failed and we were unable to recover it. 00:36:17.214 [2024-11-18 20:37:29.058054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.214 [2024-11-18 20:37:29.058086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 00:36:17.215 [2024-11-18 20:37:29.058225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.058259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 
00:36:17.215 [2024-11-18 20:37:29.058389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.058431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 00:36:17.215 [2024-11-18 20:37:29.058571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.058605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 00:36:17.215 [2024-11-18 20:37:29.058795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.058827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 00:36:17.215 [2024-11-18 20:37:29.058979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.059011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 00:36:17.215 [2024-11-18 20:37:29.059147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.059181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 
00:36:17.215 [2024-11-18 20:37:29.059289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.059321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 00:36:17.215 [2024-11-18 20:37:29.059474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.059504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 00:36:17.215 [2024-11-18 20:37:29.059650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.059703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 00:36:17.215 [2024-11-18 20:37:29.059836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.059866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 00:36:17.215 [2024-11-18 20:37:29.060014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.060044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 
00:36:17.215 [2024-11-18 20:37:29.060184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.060215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 00:36:17.215 [2024-11-18 20:37:29.060372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.060402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 00:36:17.215 [2024-11-18 20:37:29.060503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.060533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 00:36:17.215 [2024-11-18 20:37:29.060668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.060707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 00:36:17.215 [2024-11-18 20:37:29.060845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.060892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 
00:36:17.215 [2024-11-18 20:37:29.061028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.061063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 00:36:17.215 [2024-11-18 20:37:29.061184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.061218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 00:36:17.215 [2024-11-18 20:37:29.061379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.061410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 00:36:17.215 [2024-11-18 20:37:29.061516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.061548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 00:36:17.215 [2024-11-18 20:37:29.061714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.061747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 
00:36:17.215 [2024-11-18 20:37:29.061878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.061920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 00:36:17.215 [2024-11-18 20:37:29.062055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.062087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 00:36:17.215 [2024-11-18 20:37:29.062178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.062210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 00:36:17.215 [2024-11-18 20:37:29.062312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.062344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 00:36:17.215 [2024-11-18 20:37:29.062503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.062535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 
00:36:17.215 [2024-11-18 20:37:29.062663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.062702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 00:36:17.215 [2024-11-18 20:37:29.062835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.062866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 00:36:17.215 [2024-11-18 20:37:29.062984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.063064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 00:36:17.215 [2024-11-18 20:37:29.063234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.063265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 00:36:17.215 [2024-11-18 20:37:29.063399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.063432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 
00:36:17.215 [2024-11-18 20:37:29.063600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.063633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 00:36:17.215 [2024-11-18 20:37:29.063810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.063842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 00:36:17.215 [2024-11-18 20:37:29.063996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.064028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 00:36:17.215 [2024-11-18 20:37:29.064119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.064152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 00:36:17.215 [2024-11-18 20:37:29.064307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.064338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 
00:36:17.215 [2024-11-18 20:37:29.064466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.215 [2024-11-18 20:37:29.064497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.215 qpair failed and we were unable to recover it. 00:36:17.215 [2024-11-18 20:37:29.064589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.064621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.064788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.064820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.064924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.065004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.065220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.065285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 
00:36:17.216 [2024-11-18 20:37:29.065560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.065626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.065815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.065846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.066064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.066129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.066399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.066464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.066659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.066692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 
00:36:17.216 [2024-11-18 20:37:29.066846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.066877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.067015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.067045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.067127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.067156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.067418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.067483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.067705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.067737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 
00:36:17.216 [2024-11-18 20:37:29.067836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.067867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.067969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.067999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.068103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.068133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.068251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.068281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.068414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.068449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 
00:36:17.216 [2024-11-18 20:37:29.068665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.068727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.068845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.068876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.069004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.069034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.069163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.069194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.069349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.069414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 
00:36:17.216 [2024-11-18 20:37:29.069623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.069701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.069830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.069860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.069958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.069989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.070191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.070258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.070510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.070540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 
00:36:17.216 [2024-11-18 20:37:29.070708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.070740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.070844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.070874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.071034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.071065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.071266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.071332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.071609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.071694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 
00:36:17.216 [2024-11-18 20:37:29.071797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.071825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.071925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.071955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.072158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.072223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.072454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.216 [2024-11-18 20:37:29.072516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.216 qpair failed and we were unable to recover it. 00:36:17.216 [2024-11-18 20:37:29.072697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.217 [2024-11-18 20:37:29.072728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.217 qpair failed and we were unable to recover it. 
00:36:17.217 [2024-11-18 20:37:29.072829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.217 [2024-11-18 20:37:29.072859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.217 qpair failed and we were unable to recover it. 00:36:17.217 [2024-11-18 20:37:29.073008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.217 [2024-11-18 20:37:29.073101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.217 qpair failed and we were unable to recover it. 00:36:17.217 [2024-11-18 20:37:29.073371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.217 [2024-11-18 20:37:29.073440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.217 qpair failed and we were unable to recover it. 00:36:17.217 [2024-11-18 20:37:29.073725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.217 [2024-11-18 20:37:29.073758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.217 qpair failed and we were unable to recover it. 00:36:17.217 [2024-11-18 20:37:29.073916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.217 [2024-11-18 20:37:29.073997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.217 qpair failed and we were unable to recover it. 
00:36:17.219 [2024-11-18 20:37:29.106305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.219 [2024-11-18 20:37:29.106370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.219 qpair failed and we were unable to recover it. 00:36:17.219 [2024-11-18 20:37:29.106699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.219 [2024-11-18 20:37:29.106765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.219 qpair failed and we were unable to recover it. 00:36:17.219 [2024-11-18 20:37:29.107043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.219 [2024-11-18 20:37:29.107108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 00:36:17.220 [2024-11-18 20:37:29.107350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.107414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 00:36:17.220 [2024-11-18 20:37:29.107676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.107743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 
00:36:17.220 [2024-11-18 20:37:29.107956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.108021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 00:36:17.220 [2024-11-18 20:37:29.108245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.108312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 00:36:17.220 [2024-11-18 20:37:29.108603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.108684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 00:36:17.220 [2024-11-18 20:37:29.108961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.109026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 00:36:17.220 [2024-11-18 20:37:29.109315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.109379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 
00:36:17.220 [2024-11-18 20:37:29.109649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.109716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 00:36:17.220 [2024-11-18 20:37:29.109977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.110041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 00:36:17.220 [2024-11-18 20:37:29.110338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.110402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 00:36:17.220 [2024-11-18 20:37:29.110675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.110741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 00:36:17.220 [2024-11-18 20:37:29.110970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.111035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 
00:36:17.220 [2024-11-18 20:37:29.111289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.111354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 00:36:17.220 [2024-11-18 20:37:29.111666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.111734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 00:36:17.220 [2024-11-18 20:37:29.112042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.112107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 00:36:17.220 [2024-11-18 20:37:29.112401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.112467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 00:36:17.220 [2024-11-18 20:37:29.112734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.112803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 
00:36:17.220 [2024-11-18 20:37:29.113094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.113160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 00:36:17.220 [2024-11-18 20:37:29.113410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.113474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 00:36:17.220 [2024-11-18 20:37:29.113776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.113841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 00:36:17.220 [2024-11-18 20:37:29.114086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.114151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 00:36:17.220 [2024-11-18 20:37:29.114364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.114430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 
00:36:17.220 [2024-11-18 20:37:29.114691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.114758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 00:36:17.220 [2024-11-18 20:37:29.115052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.115118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 00:36:17.220 [2024-11-18 20:37:29.115407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.115471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 00:36:17.220 [2024-11-18 20:37:29.115757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.115823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 00:36:17.220 [2024-11-18 20:37:29.116055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.116122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 
00:36:17.220 [2024-11-18 20:37:29.116391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.116456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 00:36:17.220 [2024-11-18 20:37:29.116711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.116778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 00:36:17.220 [2024-11-18 20:37:29.117036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.117101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 00:36:17.220 [2024-11-18 20:37:29.117398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.117464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 00:36:17.220 [2024-11-18 20:37:29.117767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.117842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 
00:36:17.220 [2024-11-18 20:37:29.118130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.118194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 00:36:17.220 [2024-11-18 20:37:29.118486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.118551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 00:36:17.220 [2024-11-18 20:37:29.118823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.118890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 00:36:17.220 [2024-11-18 20:37:29.119156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.119220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 00:36:17.220 [2024-11-18 20:37:29.119463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.220 [2024-11-18 20:37:29.119530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.220 qpair failed and we were unable to recover it. 
00:36:17.221 [2024-11-18 20:37:29.119795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.119861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.221 [2024-11-18 20:37:29.120109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.120173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.221 [2024-11-18 20:37:29.120426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.120490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.221 [2024-11-18 20:37:29.120790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.120857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.221 [2024-11-18 20:37:29.121140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.121204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 
00:36:17.221 [2024-11-18 20:37:29.121497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.121562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.221 [2024-11-18 20:37:29.121839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.121906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.221 [2024-11-18 20:37:29.122134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.122198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.221 [2024-11-18 20:37:29.122510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.122576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.221 [2024-11-18 20:37:29.122888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.122956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 
00:36:17.221 [2024-11-18 20:37:29.123200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.123266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.221 [2024-11-18 20:37:29.123523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.123589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.221 [2024-11-18 20:37:29.123864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.123928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.221 [2024-11-18 20:37:29.124163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.124228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.221 [2024-11-18 20:37:29.124454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.124518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 
00:36:17.221 [2024-11-18 20:37:29.124762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.124829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.221 [2024-11-18 20:37:29.125084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.125148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.221 [2024-11-18 20:37:29.125396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.125462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.221 [2024-11-18 20:37:29.125752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.125817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.221 [2024-11-18 20:37:29.126103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.126167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 
00:36:17.221 [2024-11-18 20:37:29.126455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.126520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.221 [2024-11-18 20:37:29.126788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.126855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.221 [2024-11-18 20:37:29.127114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.127180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.221 [2024-11-18 20:37:29.127389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.127454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.221 [2024-11-18 20:37:29.127693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.127759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 
00:36:17.221 [2024-11-18 20:37:29.128044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.128108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.221 [2024-11-18 20:37:29.128399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.128463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.221 [2024-11-18 20:37:29.128721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.128787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.221 [2024-11-18 20:37:29.129053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.129117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.221 [2024-11-18 20:37:29.129364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.129431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 
00:36:17.221 [2024-11-18 20:37:29.129685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.129751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.221 [2024-11-18 20:37:29.129968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.130033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.221 [2024-11-18 20:37:29.130272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.130337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.221 [2024-11-18 20:37:29.130583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.130679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.221 [2024-11-18 20:37:29.130941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.131015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 
00:36:17.221 [2024-11-18 20:37:29.131310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.131375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.221 [2024-11-18 20:37:29.131618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.131705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.221 [2024-11-18 20:37:29.132012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.221 [2024-11-18 20:37:29.132077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.221 qpair failed and we were unable to recover it. 00:36:17.222 [2024-11-18 20:37:29.132381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.222 [2024-11-18 20:37:29.132446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.222 qpair failed and we were unable to recover it. 00:36:17.222 [2024-11-18 20:37:29.132687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.222 [2024-11-18 20:37:29.132754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.222 qpair failed and we were unable to recover it. 
00:36:17.225 [2024-11-18 20:37:29.162878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.162912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 00:36:17.225 [2024-11-18 20:37:29.163080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.163145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 00:36:17.225 [2024-11-18 20:37:29.163437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.163502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 00:36:17.225 [2024-11-18 20:37:29.163742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.163777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 00:36:17.225 [2024-11-18 20:37:29.163978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.164053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 
00:36:17.225 [2024-11-18 20:37:29.164352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.164406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 00:36:17.225 [2024-11-18 20:37:29.164572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.164657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 00:36:17.225 [2024-11-18 20:37:29.164850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.164885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 00:36:17.225 [2024-11-18 20:37:29.165022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.165056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 00:36:17.225 [2024-11-18 20:37:29.165246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.165282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 
00:36:17.225 [2024-11-18 20:37:29.165483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.165549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 00:36:17.225 [2024-11-18 20:37:29.165767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.165802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 00:36:17.225 [2024-11-18 20:37:29.165977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.166011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 00:36:17.225 [2024-11-18 20:37:29.166160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.166232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 00:36:17.225 [2024-11-18 20:37:29.166473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.166543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 
00:36:17.225 [2024-11-18 20:37:29.166736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.166772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 00:36:17.225 [2024-11-18 20:37:29.166941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.166975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 00:36:17.225 [2024-11-18 20:37:29.167118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.167152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 00:36:17.225 [2024-11-18 20:37:29.167427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.167492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 00:36:17.225 [2024-11-18 20:37:29.167782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.167817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 
00:36:17.225 [2024-11-18 20:37:29.167955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.168028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 00:36:17.225 [2024-11-18 20:37:29.168300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.168335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 00:36:17.225 [2024-11-18 20:37:29.168469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.168503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 00:36:17.225 [2024-11-18 20:37:29.168727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.168762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 00:36:17.225 [2024-11-18 20:37:29.168905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.168939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 
00:36:17.225 [2024-11-18 20:37:29.169105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.169139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 00:36:17.225 [2024-11-18 20:37:29.169343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.169405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 00:36:17.225 [2024-11-18 20:37:29.169662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.169699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 00:36:17.225 [2024-11-18 20:37:29.169813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.169848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 00:36:17.225 [2024-11-18 20:37:29.170064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.170129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 
00:36:17.225 [2024-11-18 20:37:29.170413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.170448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 00:36:17.225 [2024-11-18 20:37:29.170600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.170654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 00:36:17.225 [2024-11-18 20:37:29.170831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.170866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 00:36:17.225 [2024-11-18 20:37:29.170976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.225 [2024-11-18 20:37:29.171011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.225 qpair failed and we were unable to recover it. 00:36:17.225 [2024-11-18 20:37:29.171154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.171188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 
00:36:17.226 [2024-11-18 20:37:29.171370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.171436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 00:36:17.226 [2024-11-18 20:37:29.171625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.171711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 00:36:17.226 [2024-11-18 20:37:29.171971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.172036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 00:36:17.226 [2024-11-18 20:37:29.172250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.172318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 00:36:17.226 [2024-11-18 20:37:29.172600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.172634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 
00:36:17.226 [2024-11-18 20:37:29.172743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.172776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 00:36:17.226 [2024-11-18 20:37:29.172920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.172955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 00:36:17.226 [2024-11-18 20:37:29.173088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.173122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 00:36:17.226 [2024-11-18 20:37:29.173292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.173326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 00:36:17.226 [2024-11-18 20:37:29.173463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.173498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 
00:36:17.226 [2024-11-18 20:37:29.173730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.173796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 00:36:17.226 [2024-11-18 20:37:29.174051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.174086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 00:36:17.226 [2024-11-18 20:37:29.174188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.174223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 00:36:17.226 [2024-11-18 20:37:29.174457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.174517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 00:36:17.226 [2024-11-18 20:37:29.174774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.174809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 
00:36:17.226 [2024-11-18 20:37:29.174984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.175018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 00:36:17.226 [2024-11-18 20:37:29.175154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.175189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 00:36:17.226 [2024-11-18 20:37:29.175441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.175475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 00:36:17.226 [2024-11-18 20:37:29.175627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.175686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 00:36:17.226 [2024-11-18 20:37:29.175978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.176013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 
00:36:17.226 [2024-11-18 20:37:29.176185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.176219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 00:36:17.226 [2024-11-18 20:37:29.176315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.176349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 00:36:17.226 [2024-11-18 20:37:29.176499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.176559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 00:36:17.226 [2024-11-18 20:37:29.176904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.176986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 00:36:17.226 [2024-11-18 20:37:29.177189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.177267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 
00:36:17.226 [2024-11-18 20:37:29.177497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.177558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 00:36:17.226 [2024-11-18 20:37:29.177840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.177921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 00:36:17.226 [2024-11-18 20:37:29.178125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.178160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 00:36:17.226 [2024-11-18 20:37:29.178332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.178367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 00:36:17.226 [2024-11-18 20:37:29.178503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.226 [2024-11-18 20:37:29.178539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.226 qpair failed and we were unable to recover it. 
00:36:17.501 [2024-11-18 20:37:29.178895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.501 [2024-11-18 20:37:29.178931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.501 qpair failed and we were unable to recover it. 00:36:17.501 [2024-11-18 20:37:29.179075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.501 [2024-11-18 20:37:29.179111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.501 qpair failed and we were unable to recover it. 00:36:17.501 [2024-11-18 20:37:29.179358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.501 [2024-11-18 20:37:29.179419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.501 qpair failed and we were unable to recover it. 00:36:17.501 [2024-11-18 20:37:29.179633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.501 [2024-11-18 20:37:29.179712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.501 qpair failed and we were unable to recover it. 00:36:17.501 [2024-11-18 20:37:29.179938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.501 [2024-11-18 20:37:29.179972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.501 qpair failed and we were unable to recover it. 
00:36:17.501 [2024-11-18 20:37:29.180138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.501 [2024-11-18 20:37:29.180173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.501 qpair failed and we were unable to recover it. 00:36:17.501 [2024-11-18 20:37:29.180389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.501 [2024-11-18 20:37:29.180459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.501 qpair failed and we were unable to recover it. 00:36:17.501 [2024-11-18 20:37:29.180680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.501 [2024-11-18 20:37:29.180742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.501 qpair failed and we were unable to recover it. 00:36:17.501 [2024-11-18 20:37:29.181007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.501 [2024-11-18 20:37:29.181069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.501 qpair failed and we were unable to recover it. 00:36:17.501 [2024-11-18 20:37:29.181357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.501 [2024-11-18 20:37:29.181391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.501 qpair failed and we were unable to recover it. 
00:36:17.501 [2024-11-18 20:37:29.181529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.501 [2024-11-18 20:37:29.181563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.501 qpair failed and we were unable to recover it. 00:36:17.501 [... identical connect() failures (errno = 111) against tqpair=0x7fe6a0000b90, addr=10.0.0.2, port=4420 repeated continuously through 20:37:29.210105; duplicate log lines omitted ...]
00:36:17.504 [2024-11-18 20:37:29.210232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.210267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 00:36:17.504 [2024-11-18 20:37:29.210448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.210513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 00:36:17.504 [2024-11-18 20:37:29.210774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.210833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 00:36:17.504 [2024-11-18 20:37:29.211120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.211194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 00:36:17.504 [2024-11-18 20:37:29.211372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.211428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 
00:36:17.504 [2024-11-18 20:37:29.211605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.211674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 00:36:17.504 [2024-11-18 20:37:29.211933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.211989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 00:36:17.504 [2024-11-18 20:37:29.212249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.212323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 00:36:17.504 [2024-11-18 20:37:29.212531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.212565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 00:36:17.504 [2024-11-18 20:37:29.212734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.212769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 
00:36:17.504 [2024-11-18 20:37:29.213012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.213086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 00:36:17.504 [2024-11-18 20:37:29.213305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.213380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 00:36:17.504 [2024-11-18 20:37:29.213595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.213663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 00:36:17.504 [2024-11-18 20:37:29.213881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.213916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 00:36:17.504 [2024-11-18 20:37:29.214085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.214120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 
00:36:17.504 [2024-11-18 20:37:29.214273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.214307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 00:36:17.504 [2024-11-18 20:37:29.214444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.214494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 00:36:17.504 [2024-11-18 20:37:29.214698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.214733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 00:36:17.504 [2024-11-18 20:37:29.214870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.214905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 00:36:17.504 [2024-11-18 20:37:29.215121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.215202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 
00:36:17.504 [2024-11-18 20:37:29.215419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.215475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 00:36:17.504 [2024-11-18 20:37:29.215691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.215727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 00:36:17.504 [2024-11-18 20:37:29.215841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.215876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 00:36:17.504 [2024-11-18 20:37:29.215983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.216019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 00:36:17.504 [2024-11-18 20:37:29.216239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.216295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 
00:36:17.504 [2024-11-18 20:37:29.216534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.216568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 00:36:17.504 [2024-11-18 20:37:29.216740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.216775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 00:36:17.504 [2024-11-18 20:37:29.217008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.217042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 00:36:17.504 [2024-11-18 20:37:29.217215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.217249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 00:36:17.504 [2024-11-18 20:37:29.217416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.217481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 
00:36:17.504 [2024-11-18 20:37:29.217712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.217748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 00:36:17.504 [2024-11-18 20:37:29.217853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.217887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 00:36:17.504 [2024-11-18 20:37:29.218042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.218077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 00:36:17.504 [2024-11-18 20:37:29.218328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.218384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 00:36:17.504 [2024-11-18 20:37:29.218664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.218721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.504 qpair failed and we were unable to recover it. 
00:36:17.504 [2024-11-18 20:37:29.218976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.504 [2024-11-18 20:37:29.219033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 00:36:17.505 [2024-11-18 20:37:29.219310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.219384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 00:36:17.505 [2024-11-18 20:37:29.219597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.219670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 00:36:17.505 [2024-11-18 20:37:29.219960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.219994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 00:36:17.505 [2024-11-18 20:37:29.220167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.220202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 
00:36:17.505 [2024-11-18 20:37:29.220352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.220414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 00:36:17.505 [2024-11-18 20:37:29.220584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.220664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 00:36:17.505 [2024-11-18 20:37:29.220926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.220960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 00:36:17.505 [2024-11-18 20:37:29.221125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.221160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 00:36:17.505 [2024-11-18 20:37:29.221338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.221372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 
00:36:17.505 [2024-11-18 20:37:29.221488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.221523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 00:36:17.505 [2024-11-18 20:37:29.221675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.221711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 00:36:17.505 [2024-11-18 20:37:29.221857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.221891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 00:36:17.505 [2024-11-18 20:37:29.222052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.222110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 00:36:17.505 [2024-11-18 20:37:29.222314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.222370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 
00:36:17.505 [2024-11-18 20:37:29.222543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.222599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 00:36:17.505 [2024-11-18 20:37:29.222828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.222885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 00:36:17.505 [2024-11-18 20:37:29.223138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.223172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 00:36:17.505 [2024-11-18 20:37:29.223307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.223342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 00:36:17.505 [2024-11-18 20:37:29.223526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.223582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 
00:36:17.505 [2024-11-18 20:37:29.223841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.223876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 00:36:17.505 [2024-11-18 20:37:29.224044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.224078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 00:36:17.505 [2024-11-18 20:37:29.224267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.224324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 00:36:17.505 [2024-11-18 20:37:29.224510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.224565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 00:36:17.505 [2024-11-18 20:37:29.224787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.224841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 
00:36:17.505 [2024-11-18 20:37:29.224986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.225020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 00:36:17.505 [2024-11-18 20:37:29.225212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.225267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 00:36:17.505 [2024-11-18 20:37:29.225509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.225544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 00:36:17.505 [2024-11-18 20:37:29.225671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.225707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 00:36:17.505 [2024-11-18 20:37:29.225964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.226039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 
00:36:17.505 [2024-11-18 20:37:29.226283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.226317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 00:36:17.505 [2024-11-18 20:37:29.226461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.505 [2024-11-18 20:37:29.226495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.505 qpair failed and we were unable to recover it. 00:36:17.505 [2024-11-18 20:37:29.226614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.226662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 00:36:17.506 [2024-11-18 20:37:29.227018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.227053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 00:36:17.506 [2024-11-18 20:37:29.227189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.227225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 
00:36:17.506 [2024-11-18 20:37:29.227400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.227434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 00:36:17.506 [2024-11-18 20:37:29.227574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.227609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 00:36:17.506 [2024-11-18 20:37:29.227808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.227843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 00:36:17.506 [2024-11-18 20:37:29.227986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.228021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 00:36:17.506 [2024-11-18 20:37:29.228196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.228252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 
00:36:17.506 [2024-11-18 20:37:29.228518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.228552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 00:36:17.506 [2024-11-18 20:37:29.228692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.228727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 00:36:17.506 [2024-11-18 20:37:29.228866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.228900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 00:36:17.506 [2024-11-18 20:37:29.229065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.229128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 00:36:17.506 [2024-11-18 20:37:29.229378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.229434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 
00:36:17.506 [2024-11-18 20:37:29.229678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.229713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 00:36:17.506 [2024-11-18 20:37:29.229857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.229897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 00:36:17.506 [2024-11-18 20:37:29.230191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.230266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 00:36:17.506 [2024-11-18 20:37:29.230479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.230535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 00:36:17.506 [2024-11-18 20:37:29.230783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.230840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 
00:36:17.506 [2024-11-18 20:37:29.231057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.231116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 00:36:17.506 [2024-11-18 20:37:29.231342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.231376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 00:36:17.506 [2024-11-18 20:37:29.231547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.231581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 00:36:17.506 [2024-11-18 20:37:29.231727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.231762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 00:36:17.506 [2024-11-18 20:37:29.231870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.231904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 
00:36:17.506 [2024-11-18 20:37:29.232130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.232204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 00:36:17.506 [2024-11-18 20:37:29.232453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.232487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 00:36:17.506 [2024-11-18 20:37:29.232662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.232697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 00:36:17.506 [2024-11-18 20:37:29.232928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.233001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 00:36:17.506 [2024-11-18 20:37:29.233190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.233265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 
00:36:17.506 [2024-11-18 20:37:29.233530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.233586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 00:36:17.506 [2024-11-18 20:37:29.233847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.233922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 00:36:17.506 [2024-11-18 20:37:29.234177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.234211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 00:36:17.506 [2024-11-18 20:37:29.234357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.234391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 00:36:17.506 [2024-11-18 20:37:29.234617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.234691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 
00:36:17.506 [2024-11-18 20:37:29.234894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.234973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 00:36:17.506 [2024-11-18 20:37:29.235224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.235258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 00:36:17.506 [2024-11-18 20:37:29.235426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.235487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 00:36:17.506 [2024-11-18 20:37:29.235678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.235730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 00:36:17.506 [2024-11-18 20:37:29.235907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.506 [2024-11-18 20:37:29.235941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.506 qpair failed and we were unable to recover it. 
00:36:17.507 [2024-11-18 20:37:29.236057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.236092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 00:36:17.507 [2024-11-18 20:37:29.236265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.236299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 00:36:17.507 [2024-11-18 20:37:29.236465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.236521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 00:36:17.507 [2024-11-18 20:37:29.236772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.236849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 00:36:17.507 [2024-11-18 20:37:29.237112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.237146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 
00:36:17.507 [2024-11-18 20:37:29.237288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.237322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 00:36:17.507 [2024-11-18 20:37:29.237556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.237611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 00:36:17.507 [2024-11-18 20:37:29.237916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.237990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 00:36:17.507 [2024-11-18 20:37:29.238281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.238355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 00:36:17.507 [2024-11-18 20:37:29.238561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.238618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 
00:36:17.507 [2024-11-18 20:37:29.238913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.238969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 00:36:17.507 [2024-11-18 20:37:29.239222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.239256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 00:36:17.507 [2024-11-18 20:37:29.239423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.239481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 00:36:17.507 [2024-11-18 20:37:29.239720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.239813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 00:36:17.507 [2024-11-18 20:37:29.240112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.240185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 
00:36:17.507 [2024-11-18 20:37:29.240408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.240463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 00:36:17.507 [2024-11-18 20:37:29.240623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.240699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 00:36:17.507 [2024-11-18 20:37:29.240937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.240971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 00:36:17.507 [2024-11-18 20:37:29.241112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.241146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 00:36:17.507 [2024-11-18 20:37:29.241363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.241441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 
00:36:17.507 [2024-11-18 20:37:29.241730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.241804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 00:36:17.507 [2024-11-18 20:37:29.242100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.242174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 00:36:17.507 [2024-11-18 20:37:29.242434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.242491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 00:36:17.507 [2024-11-18 20:37:29.242765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.242799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 00:36:17.507 [2024-11-18 20:37:29.242932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.242966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 
00:36:17.507 [2024-11-18 20:37:29.243141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.243209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 00:36:17.507 [2024-11-18 20:37:29.243464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.243519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 00:36:17.507 [2024-11-18 20:37:29.243791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.243827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 00:36:17.507 [2024-11-18 20:37:29.243971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.244028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 00:36:17.507 [2024-11-18 20:37:29.244295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.244329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 
00:36:17.507 [2024-11-18 20:37:29.244483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.244535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 00:36:17.507 [2024-11-18 20:37:29.244811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.244885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 00:36:17.507 [2024-11-18 20:37:29.245170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.245244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 00:36:17.507 [2024-11-18 20:37:29.245458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.245514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 00:36:17.507 [2024-11-18 20:37:29.248829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.248882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 
00:36:17.507 [2024-11-18 20:37:29.249104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.249165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 00:36:17.507 [2024-11-18 20:37:29.249428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.249463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.507 qpair failed and we were unable to recover it. 00:36:17.507 [2024-11-18 20:37:29.249605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.507 [2024-11-18 20:37:29.249648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 00:36:17.508 [2024-11-18 20:37:29.249905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.249961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 00:36:17.508 [2024-11-18 20:37:29.250136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.250193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 
00:36:17.508 [2024-11-18 20:37:29.250483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.250558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 00:36:17.508 [2024-11-18 20:37:29.250773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.250832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 00:36:17.508 [2024-11-18 20:37:29.251116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.251190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 00:36:17.508 [2024-11-18 20:37:29.251445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.251479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 00:36:17.508 [2024-11-18 20:37:29.251704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.251763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 
00:36:17.508 [2024-11-18 20:37:29.252022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.252097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 00:36:17.508 [2024-11-18 20:37:29.252390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.252424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 00:36:17.508 [2024-11-18 20:37:29.252601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.252666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 00:36:17.508 [2024-11-18 20:37:29.252923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.252958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 00:36:17.508 [2024-11-18 20:37:29.253130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.253195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 
00:36:17.508 [2024-11-18 20:37:29.253436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.253507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 00:36:17.508 [2024-11-18 20:37:29.253682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.253740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 00:36:17.508 [2024-11-18 20:37:29.253991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.254066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 00:36:17.508 [2024-11-18 20:37:29.254363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.254435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 00:36:17.508 [2024-11-18 20:37:29.254667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.254726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 
00:36:17.508 [2024-11-18 20:37:29.255020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.255096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 00:36:17.508 [2024-11-18 20:37:29.255335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.255420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 00:36:17.508 [2024-11-18 20:37:29.255685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.255743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 00:36:17.508 [2024-11-18 20:37:29.255980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.256055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 00:36:17.508 [2024-11-18 20:37:29.256291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.256368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 
00:36:17.508 [2024-11-18 20:37:29.256633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.256702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 00:36:17.508 [2024-11-18 20:37:29.256939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.256973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 00:36:17.508 [2024-11-18 20:37:29.257145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.257180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 00:36:17.508 [2024-11-18 20:37:29.257433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.257506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 00:36:17.508 [2024-11-18 20:37:29.257726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.257761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 
00:36:17.508 [2024-11-18 20:37:29.257899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.257934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 00:36:17.508 [2024-11-18 20:37:29.258202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.258275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 00:36:17.508 [2024-11-18 20:37:29.258461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.258516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 00:36:17.508 [2024-11-18 20:37:29.258794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.258869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 00:36:17.508 [2024-11-18 20:37:29.259104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.259178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 
00:36:17.508 [2024-11-18 20:37:29.259440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.259495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 00:36:17.508 [2024-11-18 20:37:29.259711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.259770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 00:36:17.508 [2024-11-18 20:37:29.260071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.260145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 00:36:17.508 [2024-11-18 20:37:29.260442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.260516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 00:36:17.508 [2024-11-18 20:37:29.260771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.508 [2024-11-18 20:37:29.260845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.508 qpair failed and we were unable to recover it. 
00:36:17.508 [2024-11-18 20:37:29.261139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.261214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.261491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.261525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.261666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.261701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.261955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.262029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.262312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.262346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.262485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.262519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.262805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.262880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.263171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.263245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.263516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.263572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.263854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.263929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.264155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.264228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.264478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.264534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.264821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.264897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.265180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.265252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.265461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.265516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.265774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.265833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.266103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.266138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.266284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.266318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.266439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.266474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.266615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.266685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.266976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.267010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.267154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.267194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.267333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.267367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.267513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.267566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.267866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.267941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.268177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.268251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.268470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.268525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.268813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.268889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.269173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.269208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.269339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.269373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.269573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.269628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.269926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.270011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.509 [2024-11-18 20:37:29.270305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.509 [2024-11-18 20:37:29.270338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.509 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.270506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.270561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.270810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.270868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.271155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.271230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.271438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.271493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.271744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.271819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.272110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.272183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.272407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.272463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.272708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.272785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.273010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.273088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.273298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.273372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.273619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.273687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.273972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.274046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.274302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.274337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.274485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.274519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.274652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.274687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.275004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.275087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.275333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.275406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.275674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.275710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.275849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.275884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.276153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.276187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.276331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.276365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.276554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.276608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.276845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.276919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.277166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.277241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.277495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.277551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.277833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.277868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.278034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.278069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.278353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.278427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.278662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.278729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.278937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.279011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.279294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.279367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.279617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.279705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.279977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.280051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.280284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.280318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.280455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.280490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.280706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.280742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.280854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.510 [2024-11-18 20:37:29.280889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.510 qpair failed and we were unable to recover it.
00:36:17.510 [2024-11-18 20:37:29.281019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.511 [2024-11-18 20:37:29.281053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.511 qpair failed and we were unable to recover it.
00:36:17.511 [2024-11-18 20:37:29.281295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.511 [2024-11-18 20:37:29.281352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.511 qpair failed and we were unable to recover it.
00:36:17.511 [2024-11-18 20:37:29.281547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.511 [2024-11-18 20:37:29.281603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.511 qpair failed and we were unable to recover it.
00:36:17.511 [2024-11-18 20:37:29.281831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.511 [2024-11-18 20:37:29.281888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.511 qpair failed and we were unable to recover it.
00:36:17.511 [2024-11-18 20:37:29.282104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.511 [2024-11-18 20:37:29.282160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.511 qpair failed and we were unable to recover it.
00:36:17.511 [2024-11-18 20:37:29.282418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.511 [2024-11-18 20:37:29.282452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.511 qpair failed and we were unable to recover it.
00:36:17.511 [2024-11-18 20:37:29.282592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.511 [2024-11-18 20:37:29.282627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.511 qpair failed and we were unable to recover it.
00:36:17.511 [2024-11-18 20:37:29.282916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.511 [2024-11-18 20:37:29.282972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.511 qpair failed and we were unable to recover it.
00:36:17.511 [2024-11-18 20:37:29.283223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.511 [2024-11-18 20:37:29.283257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.511 qpair failed and we were unable to recover it.
00:36:17.511 [2024-11-18 20:37:29.283394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.511 [2024-11-18 20:37:29.283444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.511 qpair failed and we were unable to recover it.
00:36:17.511 [2024-11-18 20:37:29.283681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.511 [2024-11-18 20:37:29.283737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.511 qpair failed and we were unable to recover it.
00:36:17.511 [2024-11-18 20:37:29.283955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.511 [2024-11-18 20:37:29.284030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.511 qpair failed and we were unable to recover it.
00:36:17.511 [2024-11-18 20:37:29.284321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.511 [2024-11-18 20:37:29.284395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.511 qpair failed and we were unable to recover it.
00:36:17.511 [2024-11-18 20:37:29.284616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.511 [2024-11-18 20:37:29.284688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.511 qpair failed and we were unable to recover it.
00:36:17.511 [2024-11-18 20:37:29.284897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.511 [2024-11-18 20:37:29.284952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.511 qpair failed and we were unable to recover it.
00:36:17.511 [2024-11-18 20:37:29.285180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.511 [2024-11-18 20:37:29.285252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.511 qpair failed and we were unable to recover it.
00:36:17.511 [2024-11-18 20:37:29.285468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.511 [2024-11-18 20:37:29.285524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.511 qpair failed and we were unable to recover it.
00:36:17.511 [2024-11-18 20:37:29.285775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.511 [2024-11-18 20:37:29.285851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.511 qpair failed and we were unable to recover it.
00:36:17.511 [2024-11-18 20:37:29.286199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.511 [2024-11-18 20:37:29.286299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.511 qpair failed and we were unable to recover it.
00:36:17.511 [2024-11-18 20:37:29.286574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.511 [2024-11-18 20:37:29.286610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.511 qpair failed and we were unable to recover it.
00:36:17.511 [2024-11-18 20:37:29.286749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.511 [2024-11-18 20:37:29.286785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.511 qpair failed and we were unable to recover it.
00:36:17.511 [2024-11-18 20:37:29.287065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.511 [2024-11-18 20:37:29.287131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.511 qpair failed and we were unable to recover it.
00:36:17.511 [2024-11-18 20:37:29.287459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.511 [2024-11-18 20:37:29.287523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.511 qpair failed and we were unable to recover it.
00:36:17.511 [2024-11-18 20:37:29.287768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.511 [2024-11-18 20:37:29.287825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.511 qpair failed and we were unable to recover it.
00:36:17.511 [2024-11-18 20:37:29.288077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.511 [2024-11-18 20:37:29.288143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.511 qpair failed and we were unable to recover it.
00:36:17.511 [2024-11-18 20:37:29.288436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.511 [2024-11-18 20:37:29.288501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.511 qpair failed and we were unable to recover it. 00:36:17.511 [2024-11-18 20:37:29.288813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.511 [2024-11-18 20:37:29.288870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.511 qpair failed and we were unable to recover it. 00:36:17.511 [2024-11-18 20:37:29.289074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.511 [2024-11-18 20:37:29.289138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.511 qpair failed and we were unable to recover it. 00:36:17.511 [2024-11-18 20:37:29.289425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.511 [2024-11-18 20:37:29.289490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.511 qpair failed and we were unable to recover it. 00:36:17.511 [2024-11-18 20:37:29.289746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.511 [2024-11-18 20:37:29.289802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.511 qpair failed and we were unable to recover it. 
00:36:17.511 [2024-11-18 20:37:29.290020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.511 [2024-11-18 20:37:29.290076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.511 qpair failed and we were unable to recover it. 00:36:17.511 [2024-11-18 20:37:29.290364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.511 [2024-11-18 20:37:29.290429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.511 qpair failed and we were unable to recover it. 00:36:17.511 [2024-11-18 20:37:29.290702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.511 [2024-11-18 20:37:29.290759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.511 qpair failed and we were unable to recover it. 00:36:17.511 [2024-11-18 20:37:29.291000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.511 [2024-11-18 20:37:29.291064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.511 qpair failed and we were unable to recover it. 00:36:17.511 [2024-11-18 20:37:29.291303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.511 [2024-11-18 20:37:29.291369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.511 qpair failed and we were unable to recover it. 
00:36:17.511 [2024-11-18 20:37:29.291626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.511 [2024-11-18 20:37:29.291718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.511 qpair failed and we were unable to recover it. 00:36:17.511 [2024-11-18 20:37:29.291941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.511 [2024-11-18 20:37:29.291992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.511 qpair failed and we were unable to recover it. 00:36:17.511 [2024-11-18 20:37:29.292164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.511 [2024-11-18 20:37:29.292198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.511 qpair failed and we were unable to recover it. 00:36:17.511 [2024-11-18 20:37:29.292477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.292511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 00:36:17.512 [2024-11-18 20:37:29.292700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.292758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 
00:36:17.512 [2024-11-18 20:37:29.293036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.293101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 00:36:17.512 [2024-11-18 20:37:29.293394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.293428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 00:36:17.512 [2024-11-18 20:37:29.293571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.293606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 00:36:17.512 [2024-11-18 20:37:29.293866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.293923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 00:36:17.512 [2024-11-18 20:37:29.294190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.294255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 
00:36:17.512 [2024-11-18 20:37:29.294528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.294570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 00:36:17.512 [2024-11-18 20:37:29.294689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.294724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 00:36:17.512 [2024-11-18 20:37:29.294984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.295048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 00:36:17.512 [2024-11-18 20:37:29.295310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.295342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 00:36:17.512 [2024-11-18 20:37:29.295460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.295494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 
00:36:17.512 [2024-11-18 20:37:29.295806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.295859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 00:36:17.512 [2024-11-18 20:37:29.296125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.296187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 00:36:17.512 [2024-11-18 20:37:29.296474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.296507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 00:36:17.512 [2024-11-18 20:37:29.296651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.296683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 00:36:17.512 [2024-11-18 20:37:29.296830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.296882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 
00:36:17.512 [2024-11-18 20:37:29.297124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.297186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 00:36:17.512 [2024-11-18 20:37:29.297479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.297541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 00:36:17.512 [2024-11-18 20:37:29.297851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.297908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 00:36:17.512 [2024-11-18 20:37:29.298214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.298269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 00:36:17.512 [2024-11-18 20:37:29.298571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.298656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 
00:36:17.512 [2024-11-18 20:37:29.298911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.298965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 00:36:17.512 [2024-11-18 20:37:29.299077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.299111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 00:36:17.512 [2024-11-18 20:37:29.299295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.299361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 00:36:17.512 [2024-11-18 20:37:29.299504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.299541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 00:36:17.512 [2024-11-18 20:37:29.299760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.299818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 
00:36:17.512 [2024-11-18 20:37:29.300054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.300119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 00:36:17.512 [2024-11-18 20:37:29.300396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.300461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 00:36:17.512 [2024-11-18 20:37:29.300729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.300764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 00:36:17.512 [2024-11-18 20:37:29.300882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.300916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 00:36:17.512 [2024-11-18 20:37:29.301180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.301244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 
00:36:17.512 [2024-11-18 20:37:29.301580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.301616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 00:36:17.512 [2024-11-18 20:37:29.301766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.301800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 00:36:17.512 [2024-11-18 20:37:29.302015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.302089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 00:36:17.512 [2024-11-18 20:37:29.302339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.302374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 00:36:17.512 [2024-11-18 20:37:29.302488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.302522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 
00:36:17.512 [2024-11-18 20:37:29.302693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.512 [2024-11-18 20:37:29.302728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.512 qpair failed and we were unable to recover it. 00:36:17.512 [2024-11-18 20:37:29.302949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.303014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 00:36:17.513 [2024-11-18 20:37:29.303266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.303331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 00:36:17.513 [2024-11-18 20:37:29.303627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.303676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 00:36:17.513 [2024-11-18 20:37:29.303958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.304026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 
00:36:17.513 [2024-11-18 20:37:29.304254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.304320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 00:36:17.513 [2024-11-18 20:37:29.304519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.304583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 00:36:17.513 [2024-11-18 20:37:29.304911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.304946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 00:36:17.513 [2024-11-18 20:37:29.305115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.305150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 00:36:17.513 [2024-11-18 20:37:29.305355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.305419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 
00:36:17.513 [2024-11-18 20:37:29.305658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.305693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 00:36:17.513 [2024-11-18 20:37:29.305869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.305926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 00:36:17.513 [2024-11-18 20:37:29.306179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.306242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 00:36:17.513 [2024-11-18 20:37:29.306535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.306599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 00:36:17.513 [2024-11-18 20:37:29.306904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.306970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 
00:36:17.513 [2024-11-18 20:37:29.307270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.307338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 00:36:17.513 [2024-11-18 20:37:29.307654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.307721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 00:36:17.513 [2024-11-18 20:37:29.308009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.308074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 00:36:17.513 [2024-11-18 20:37:29.308327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.308361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 00:36:17.513 [2024-11-18 20:37:29.308529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.308581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 
00:36:17.513 [2024-11-18 20:37:29.308835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.308902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 00:36:17.513 [2024-11-18 20:37:29.309126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.309191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 00:36:17.513 [2024-11-18 20:37:29.309436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.309470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 00:36:17.513 [2024-11-18 20:37:29.309633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.309677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 00:36:17.513 [2024-11-18 20:37:29.309880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.309954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 
00:36:17.513 [2024-11-18 20:37:29.310239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.310304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 00:36:17.513 [2024-11-18 20:37:29.310594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.310678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 00:36:17.513 [2024-11-18 20:37:29.310979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.311043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 00:36:17.513 [2024-11-18 20:37:29.311357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.311421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 00:36:17.513 [2024-11-18 20:37:29.311722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.311788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 
00:36:17.513 [2024-11-18 20:37:29.312031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.312095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 00:36:17.513 [2024-11-18 20:37:29.312297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.312361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 00:36:17.513 [2024-11-18 20:37:29.312594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.312627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 00:36:17.513 [2024-11-18 20:37:29.312813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.312847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 00:36:17.513 [2024-11-18 20:37:29.313137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.513 [2024-11-18 20:37:29.313201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.513 qpair failed and we were unable to recover it. 
00:36:17.513 [2024-11-18 20:37:29.313430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.513 [2024-11-18 20:37:29.313494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.513 qpair failed and we were unable to recover it.
00:36:17.513 [... the same three-line error repeats roughly 115 more times between 20:37:29.313 and 20:37:29.347: every connect() attempt to 10.0.0.2 port 4420 fails with errno = 111 (ECONNREFUSED), and each qpair fails without recovery ...]
00:36:17.517 [2024-11-18 20:37:29.347533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.347567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 00:36:17.517 [2024-11-18 20:37:29.347711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.347745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 00:36:17.517 [2024-11-18 20:37:29.347935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.347969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 00:36:17.517 [2024-11-18 20:37:29.348112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.348146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 00:36:17.517 [2024-11-18 20:37:29.348329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.348393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 
00:36:17.517 [2024-11-18 20:37:29.348688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.348754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 00:36:17.517 [2024-11-18 20:37:29.349003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.349070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 00:36:17.517 [2024-11-18 20:37:29.349284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.349348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 00:36:17.517 [2024-11-18 20:37:29.349579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.349615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 00:36:17.517 [2024-11-18 20:37:29.349767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.349802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 
00:36:17.517 [2024-11-18 20:37:29.350039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.350103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 00:36:17.517 [2024-11-18 20:37:29.350390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.350454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 00:36:17.517 [2024-11-18 20:37:29.350705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.350740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 00:36:17.517 [2024-11-18 20:37:29.350837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.350871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 00:36:17.517 [2024-11-18 20:37:29.351037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.351071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 
00:36:17.517 [2024-11-18 20:37:29.351302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.351367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 00:36:17.517 [2024-11-18 20:37:29.351611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.351688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 00:36:17.517 [2024-11-18 20:37:29.351924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.351988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 00:36:17.517 [2024-11-18 20:37:29.352278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.352342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 00:36:17.517 [2024-11-18 20:37:29.352588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.352685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 
00:36:17.517 [2024-11-18 20:37:29.352946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.353010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 00:36:17.517 [2024-11-18 20:37:29.353258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.353321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 00:36:17.517 [2024-11-18 20:37:29.353536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.353600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 00:36:17.517 [2024-11-18 20:37:29.353913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.353947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 00:36:17.517 [2024-11-18 20:37:29.354083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.354117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 
00:36:17.517 [2024-11-18 20:37:29.354396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.354460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 00:36:17.517 [2024-11-18 20:37:29.354746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.354780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 00:36:17.517 [2024-11-18 20:37:29.354899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.354932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 00:36:17.517 [2024-11-18 20:37:29.355135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.355199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 00:36:17.517 [2024-11-18 20:37:29.355492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.355556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 
00:36:17.517 [2024-11-18 20:37:29.355823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.355888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 00:36:17.517 [2024-11-18 20:37:29.356123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.356187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 00:36:17.517 [2024-11-18 20:37:29.356448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.356482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 00:36:17.517 [2024-11-18 20:37:29.356618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.356659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.517 qpair failed and we were unable to recover it. 00:36:17.517 [2024-11-18 20:37:29.356911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.517 [2024-11-18 20:37:29.356976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 
00:36:17.518 [2024-11-18 20:37:29.357267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.357332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 00:36:17.518 [2024-11-18 20:37:29.357619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.357695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 00:36:17.518 [2024-11-18 20:37:29.357986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.358059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 00:36:17.518 [2024-11-18 20:37:29.358343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.358409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 00:36:17.518 [2024-11-18 20:37:29.358702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.358768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 
00:36:17.518 [2024-11-18 20:37:29.358964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.359028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 00:36:17.518 [2024-11-18 20:37:29.359276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.359341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 00:36:17.518 [2024-11-18 20:37:29.359599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.359677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 00:36:17.518 [2024-11-18 20:37:29.359927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.359993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 00:36:17.518 [2024-11-18 20:37:29.360254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.360318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 
00:36:17.518 [2024-11-18 20:37:29.360599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.360696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 00:36:17.518 [2024-11-18 20:37:29.360999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.361065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 00:36:17.518 [2024-11-18 20:37:29.361303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.361367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 00:36:17.518 [2024-11-18 20:37:29.361591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.361670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 00:36:17.518 [2024-11-18 20:37:29.361921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.361985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 
00:36:17.518 [2024-11-18 20:37:29.362264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.362298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 00:36:17.518 [2024-11-18 20:37:29.362403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.362437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 00:36:17.518 [2024-11-18 20:37:29.362624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.362706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 00:36:17.518 [2024-11-18 20:37:29.362921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.362985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 00:36:17.518 [2024-11-18 20:37:29.363211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.363245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 
00:36:17.518 [2024-11-18 20:37:29.363414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.363449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 00:36:17.518 [2024-11-18 20:37:29.363754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.363818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 00:36:17.518 [2024-11-18 20:37:29.364056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.364122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 00:36:17.518 [2024-11-18 20:37:29.364407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.364473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 00:36:17.518 [2024-11-18 20:37:29.364767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.364832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 
00:36:17.518 [2024-11-18 20:37:29.365070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.365134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 00:36:17.518 [2024-11-18 20:37:29.365413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.365477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 00:36:17.518 [2024-11-18 20:37:29.365771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.365837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 00:36:17.518 [2024-11-18 20:37:29.366129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.366193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 00:36:17.518 [2024-11-18 20:37:29.366477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.366550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 
00:36:17.518 [2024-11-18 20:37:29.366806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.366872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 00:36:17.518 [2024-11-18 20:37:29.367166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.367231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 00:36:17.518 [2024-11-18 20:37:29.367520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.367553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 00:36:17.518 [2024-11-18 20:37:29.367724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.367793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 00:36:17.518 [2024-11-18 20:37:29.368032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.368066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 
00:36:17.518 [2024-11-18 20:37:29.368211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.368245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 00:36:17.518 [2024-11-18 20:37:29.368386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.518 [2024-11-18 20:37:29.368420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.518 qpair failed and we were unable to recover it. 00:36:17.518 [2024-11-18 20:37:29.368689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.519 [2024-11-18 20:37:29.368724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.519 qpair failed and we were unable to recover it. 00:36:17.519 [2024-11-18 20:37:29.368840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.519 [2024-11-18 20:37:29.368874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.519 qpair failed and we were unable to recover it. 00:36:17.519 [2024-11-18 20:37:29.369040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.519 [2024-11-18 20:37:29.369074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.519 qpair failed and we were unable to recover it. 
00:36:17.519 [2024-11-18 20:37:29.369268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.519 [2024-11-18 20:37:29.369333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.519 qpair failed and we were unable to recover it. 
[... same failure repeated: connect() failed (errno = 111) and unrecoverable qpair error for tqpair=0x1671b40 (addr=10.0.0.2, port=4420), logged continuously from 2024-11-18 20:37:29.369 through 20:37:29.402 ...]
00:36:17.522 [2024-11-18 20:37:29.402605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.522 [2024-11-18 20:37:29.402684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.522 qpair failed and we were unable to recover it. 00:36:17.522 [2024-11-18 20:37:29.402940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.522 [2024-11-18 20:37:29.403020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.522 qpair failed and we were unable to recover it. 00:36:17.522 [2024-11-18 20:37:29.403268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.522 [2024-11-18 20:37:29.403334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.522 qpair failed and we were unable to recover it. 00:36:17.522 [2024-11-18 20:37:29.403597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.522 [2024-11-18 20:37:29.403682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.522 qpair failed and we were unable to recover it. 00:36:17.522 [2024-11-18 20:37:29.404004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.522 [2024-11-18 20:37:29.404070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.522 qpair failed and we were unable to recover it. 
00:36:17.522 [2024-11-18 20:37:29.404312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.522 [2024-11-18 20:37:29.404379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.522 qpair failed and we were unable to recover it. 00:36:17.522 [2024-11-18 20:37:29.404678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.522 [2024-11-18 20:37:29.404756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.522 qpair failed and we were unable to recover it. 00:36:17.522 [2024-11-18 20:37:29.404973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.522 [2024-11-18 20:37:29.405039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.522 qpair failed and we were unable to recover it. 00:36:17.522 [2024-11-18 20:37:29.405324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.522 [2024-11-18 20:37:29.405396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.522 qpair failed and we were unable to recover it. 00:36:17.522 [2024-11-18 20:37:29.405664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.522 [2024-11-18 20:37:29.405732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.522 qpair failed and we were unable to recover it. 
00:36:17.522 [2024-11-18 20:37:29.405989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.522 [2024-11-18 20:37:29.406058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.522 qpair failed and we were unable to recover it. 00:36:17.522 [2024-11-18 20:37:29.406273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.522 [2024-11-18 20:37:29.406339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.522 qpair failed and we were unable to recover it. 00:36:17.522 [2024-11-18 20:37:29.406674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.522 [2024-11-18 20:37:29.406754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.522 qpair failed and we were unable to recover it. 00:36:17.522 [2024-11-18 20:37:29.407014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.522 [2024-11-18 20:37:29.407080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.522 qpair failed and we were unable to recover it. 00:36:17.522 [2024-11-18 20:37:29.407375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.522 [2024-11-18 20:37:29.407438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.522 qpair failed and we were unable to recover it. 
00:36:17.522 [2024-11-18 20:37:29.407746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.522 [2024-11-18 20:37:29.407814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.522 qpair failed and we were unable to recover it. 00:36:17.522 [2024-11-18 20:37:29.408042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.522 [2024-11-18 20:37:29.408123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.522 qpair failed and we were unable to recover it. 00:36:17.522 [2024-11-18 20:37:29.408417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.522 [2024-11-18 20:37:29.408483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.522 qpair failed and we were unable to recover it. 00:36:17.522 [2024-11-18 20:37:29.408774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.522 [2024-11-18 20:37:29.408842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.522 qpair failed and we were unable to recover it. 00:36:17.522 [2024-11-18 20:37:29.409160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.522 [2024-11-18 20:37:29.409224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.522 qpair failed and we were unable to recover it. 
00:36:17.522 [2024-11-18 20:37:29.409499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.522 [2024-11-18 20:37:29.409572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.522 qpair failed and we were unable to recover it. 00:36:17.522 [2024-11-18 20:37:29.409856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.522 [2024-11-18 20:37:29.409918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.522 qpair failed and we were unable to recover it. 00:36:17.522 [2024-11-18 20:37:29.410184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.522 [2024-11-18 20:37:29.410249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.522 qpair failed and we were unable to recover it. 00:36:17.522 [2024-11-18 20:37:29.410553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.522 [2024-11-18 20:37:29.410619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.522 qpair failed and we were unable to recover it. 00:36:17.522 [2024-11-18 20:37:29.410917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.522 [2024-11-18 20:37:29.410999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.522 qpair failed and we were unable to recover it. 
00:36:17.522 [2024-11-18 20:37:29.411261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.522 [2024-11-18 20:37:29.411328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.522 qpair failed and we were unable to recover it. 00:36:17.522 [2024-11-18 20:37:29.411652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.522 [2024-11-18 20:37:29.411720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.522 qpair failed and we were unable to recover it. 00:36:17.522 [2024-11-18 20:37:29.412026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.522 [2024-11-18 20:37:29.412092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.522 qpair failed and we were unable to recover it. 00:36:17.522 [2024-11-18 20:37:29.412353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.522 [2024-11-18 20:37:29.412419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.522 qpair failed and we were unable to recover it. 00:36:17.522 [2024-11-18 20:37:29.412703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.412771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 
00:36:17.523 [2024-11-18 20:37:29.413025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.413100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 00:36:17.523 [2024-11-18 20:37:29.413401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.413465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 00:36:17.523 [2024-11-18 20:37:29.413733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.413801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 00:36:17.523 [2024-11-18 20:37:29.414060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.414126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 00:36:17.523 [2024-11-18 20:37:29.414374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.414442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 
00:36:17.523 [2024-11-18 20:37:29.414742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.414809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 00:36:17.523 [2024-11-18 20:37:29.415108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.415174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 00:36:17.523 [2024-11-18 20:37:29.415424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.415507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 00:36:17.523 [2024-11-18 20:37:29.415808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.415874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 00:36:17.523 [2024-11-18 20:37:29.416143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.416209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 
00:36:17.523 [2024-11-18 20:37:29.416505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.416571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 00:36:17.523 [2024-11-18 20:37:29.416839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.416906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 00:36:17.523 [2024-11-18 20:37:29.417184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.417250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 00:36:17.523 [2024-11-18 20:37:29.417516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.417580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 00:36:17.523 [2024-11-18 20:37:29.417858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.417924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 
00:36:17.523 [2024-11-18 20:37:29.418218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.418282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 00:36:17.523 [2024-11-18 20:37:29.418540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.418604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 00:36:17.523 [2024-11-18 20:37:29.418924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.418988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 00:36:17.523 [2024-11-18 20:37:29.419227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.419286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 00:36:17.523 [2024-11-18 20:37:29.419525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.419584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 
00:36:17.523 [2024-11-18 20:37:29.419866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.419931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 00:36:17.523 [2024-11-18 20:37:29.420199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.420264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 00:36:17.523 [2024-11-18 20:37:29.420520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.420584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 00:36:17.523 [2024-11-18 20:37:29.420924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.420990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 00:36:17.523 [2024-11-18 20:37:29.421298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.421357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 
00:36:17.523 [2024-11-18 20:37:29.421598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.421681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 00:36:17.523 [2024-11-18 20:37:29.421972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.422035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 00:36:17.523 [2024-11-18 20:37:29.422297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.422361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 00:36:17.523 [2024-11-18 20:37:29.422667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.422733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 00:36:17.523 [2024-11-18 20:37:29.423018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.423083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 
00:36:17.523 [2024-11-18 20:37:29.423315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.423379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 00:36:17.523 [2024-11-18 20:37:29.423620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.423701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 00:36:17.523 [2024-11-18 20:37:29.423993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.523 [2024-11-18 20:37:29.424056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.523 qpair failed and we were unable to recover it. 00:36:17.523 [2024-11-18 20:37:29.424356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.524 [2024-11-18 20:37:29.424420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.524 qpair failed and we were unable to recover it. 00:36:17.524 [2024-11-18 20:37:29.424713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.524 [2024-11-18 20:37:29.424788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.524 qpair failed and we were unable to recover it. 
00:36:17.524 [2024-11-18 20:37:29.425091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.524 [2024-11-18 20:37:29.425151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.524 qpair failed and we were unable to recover it. 00:36:17.524 [2024-11-18 20:37:29.425401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.524 [2024-11-18 20:37:29.425465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.524 qpair failed and we were unable to recover it. 00:36:17.524 [2024-11-18 20:37:29.425695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.524 [2024-11-18 20:37:29.425760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.524 qpair failed and we were unable to recover it. 00:36:17.524 [2024-11-18 20:37:29.426013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.524 [2024-11-18 20:37:29.426078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.524 qpair failed and we were unable to recover it. 00:36:17.524 [2024-11-18 20:37:29.426373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.524 [2024-11-18 20:37:29.426436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.524 qpair failed and we were unable to recover it. 
00:36:17.524 [2024-11-18 20:37:29.426743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.524 [2024-11-18 20:37:29.426808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.524 qpair failed and we were unable to recover it. 00:36:17.524 [2024-11-18 20:37:29.427113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.524 [2024-11-18 20:37:29.427172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.524 qpair failed and we were unable to recover it. 00:36:17.524 [2024-11-18 20:37:29.427367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.524 [2024-11-18 20:37:29.427426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.524 qpair failed and we were unable to recover it. 00:36:17.524 [2024-11-18 20:37:29.427586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.524 [2024-11-18 20:37:29.427682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.524 qpair failed and we were unable to recover it. 00:36:17.524 [2024-11-18 20:37:29.427936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.524 [2024-11-18 20:37:29.428000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.524 qpair failed and we were unable to recover it. 
00:36:17.524 [2024-11-18 20:37:29.428286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.524 [2024-11-18 20:37:29.428350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.524 qpair failed and we were unable to recover it. 00:36:17.524 [2024-11-18 20:37:29.428608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.524 [2024-11-18 20:37:29.428693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.524 qpair failed and we were unable to recover it. 00:36:17.524 [2024-11-18 20:37:29.429010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.524 [2024-11-18 20:37:29.429070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.524 qpair failed and we were unable to recover it. 00:36:17.524 [2024-11-18 20:37:29.429382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.524 [2024-11-18 20:37:29.429447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.524 qpair failed and we were unable to recover it. 00:36:17.524 [2024-11-18 20:37:29.429741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.524 [2024-11-18 20:37:29.429806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.524 qpair failed and we were unable to recover it. 
[identical connect() failed (errno = 111) / qpair recovery failure records for tqpair=0x1671b40, addr=10.0.0.2, port=4420 repeat through 2024-11-18 20:37:29.466486]
00:36:17.527 [2024-11-18 20:37:29.466698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.527 [2024-11-18 20:37:29.466758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.527 qpair failed and we were unable to recover it. 00:36:17.527 [2024-11-18 20:37:29.466970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.527 [2024-11-18 20:37:29.467036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.527 qpair failed and we were unable to recover it. 00:36:17.527 [2024-11-18 20:37:29.467296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.527 [2024-11-18 20:37:29.467363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.527 qpair failed and we were unable to recover it. 00:36:17.527 [2024-11-18 20:37:29.467737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.527 [2024-11-18 20:37:29.467802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.527 qpair failed and we were unable to recover it. 00:36:17.527 [2024-11-18 20:37:29.468098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.527 [2024-11-18 20:37:29.468163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.527 qpair failed and we were unable to recover it. 
00:36:17.527 [2024-11-18 20:37:29.468446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.527 [2024-11-18 20:37:29.468510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.527 qpair failed and we were unable to recover it.
00:36:17.527 [2024-11-18 20:37:29.468748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.527 [2024-11-18 20:37:29.468813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.527 qpair failed and we were unable to recover it.
00:36:17.527 [2024-11-18 20:37:29.469021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.527 [2024-11-18 20:37:29.469087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.527 qpair failed and we were unable to recover it.
00:36:17.527 [2024-11-18 20:37:29.469377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.527 [2024-11-18 20:37:29.469442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.527 qpair failed and we were unable to recover it.
00:36:17.527 [2024-11-18 20:37:29.469697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.527 [2024-11-18 20:37:29.469762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.527 qpair failed and we were unable to recover it.
00:36:17.527 [2024-11-18 20:37:29.469993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.527 [2024-11-18 20:37:29.470057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.527 qpair failed and we were unable to recover it.
00:36:17.527 [2024-11-18 20:37:29.470326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.527 [2024-11-18 20:37:29.470389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.527 qpair failed and we were unable to recover it.
00:36:17.527 [2024-11-18 20:37:29.470633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.527 [2024-11-18 20:37:29.470711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.527 qpair failed and we were unable to recover it.
00:36:17.527 [2024-11-18 20:37:29.470972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.527 [2024-11-18 20:37:29.471037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.527 qpair failed and we were unable to recover it.
00:36:17.527 [2024-11-18 20:37:29.471333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.527 [2024-11-18 20:37:29.471396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.527 qpair failed and we were unable to recover it.
00:36:17.527 [2024-11-18 20:37:29.471664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.527 [2024-11-18 20:37:29.471729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.527 qpair failed and we were unable to recover it.
00:36:17.527 [2024-11-18 20:37:29.471985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.527 [2024-11-18 20:37:29.472049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.527 qpair failed and we were unable to recover it.
00:36:17.527 [2024-11-18 20:37:29.472344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.527 [2024-11-18 20:37:29.472408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.527 qpair failed and we were unable to recover it.
00:36:17.527 [2024-11-18 20:37:29.472692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.527 [2024-11-18 20:37:29.472758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.527 qpair failed and we were unable to recover it.
00:36:17.527 [2024-11-18 20:37:29.473016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.527 [2024-11-18 20:37:29.473080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.527 qpair failed and we were unable to recover it.
00:36:17.527 [2024-11-18 20:37:29.473330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.527 [2024-11-18 20:37:29.473393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.527 qpair failed and we were unable to recover it.
00:36:17.527 [2024-11-18 20:37:29.473599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.527 [2024-11-18 20:37:29.473680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.527 qpair failed and we were unable to recover it.
00:36:17.527 [2024-11-18 20:37:29.473979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.527 [2024-11-18 20:37:29.474043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.527 qpair failed and we were unable to recover it.
00:36:17.527 [2024-11-18 20:37:29.474331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.527 [2024-11-18 20:37:29.474405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.527 qpair failed and we were unable to recover it.
00:36:17.527 [2024-11-18 20:37:29.474614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.527 [2024-11-18 20:37:29.474697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.527 qpair failed and we were unable to recover it.
00:36:17.527 [2024-11-18 20:37:29.474930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.527 [2024-11-18 20:37:29.474995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.475300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.475363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.475613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.475696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.475949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.476013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.476303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.476366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.476619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.476699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.476955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.477020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.477211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.477274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.477570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.477633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.477967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.478030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.478240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.478305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.478525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.478588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.478905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.478970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.479202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.479266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.479515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.479580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.479825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.479891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.480186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.480249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.480545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.480609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.480932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.480996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.481240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.481304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.481588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.481670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.481924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.481989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.482263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.482326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.482604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.482686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.482974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.483038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.483291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.483349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.483589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.483665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.483986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.484051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.484353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.484417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.484701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.484767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.485067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.485131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.485431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.485495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.485794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.485860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.486105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.528 [2024-11-18 20:37:29.486169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.528 qpair failed and we were unable to recover it.
00:36:17.528 [2024-11-18 20:37:29.486464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.529 [2024-11-18 20:37:29.486523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.529 qpair failed and we were unable to recover it.
00:36:17.529 [2024-11-18 20:37:29.486741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.529 [2024-11-18 20:37:29.486802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.529 qpair failed and we were unable to recover it.
00:36:17.529 [2024-11-18 20:37:29.486977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.529 [2024-11-18 20:37:29.487057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.529 qpair failed and we were unable to recover it.
00:36:17.529 [2024-11-18 20:37:29.487353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.529 [2024-11-18 20:37:29.487417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.529 qpair failed and we were unable to recover it.
00:36:17.529 [2024-11-18 20:37:29.487680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.529 [2024-11-18 20:37:29.487746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.529 qpair failed and we were unable to recover it.
00:36:17.529 [2024-11-18 20:37:29.488051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.529 [2024-11-18 20:37:29.488110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.529 qpair failed and we were unable to recover it.
00:36:17.529 [2024-11-18 20:37:29.488402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.529 [2024-11-18 20:37:29.488469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.529 qpair failed and we were unable to recover it.
00:36:17.529 [2024-11-18 20:37:29.488737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.529 [2024-11-18 20:37:29.488802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.529 qpair failed and we were unable to recover it.
00:36:17.529 [2024-11-18 20:37:29.489062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.529 [2024-11-18 20:37:29.489126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.529 qpair failed and we were unable to recover it.
00:36:17.529 [2024-11-18 20:37:29.489407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.529 [2024-11-18 20:37:29.489466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.529 qpair failed and we were unable to recover it.
00:36:17.529 [2024-11-18 20:37:29.489663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.529 [2024-11-18 20:37:29.489725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.529 qpair failed and we were unable to recover it.
00:36:17.529 [2024-11-18 20:37:29.489995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.529 [2024-11-18 20:37:29.490055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.529 qpair failed and we were unable to recover it.
00:36:17.529 [2024-11-18 20:37:29.490382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.529 [2024-11-18 20:37:29.490441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.529 qpair failed and we were unable to recover it.
00:36:17.529 [2024-11-18 20:37:29.490741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.529 [2024-11-18 20:37:29.490821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.529 qpair failed and we were unable to recover it.
00:36:17.529 [2024-11-18 20:37:29.491121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.529 [2024-11-18 20:37:29.491188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.529 qpair failed and we were unable to recover it.
00:36:17.529 [2024-11-18 20:37:29.491438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.529 [2024-11-18 20:37:29.491498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.529 qpair failed and we were unable to recover it.
00:36:17.529 [2024-11-18 20:37:29.491717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.529 [2024-11-18 20:37:29.491779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.529 qpair failed and we were unable to recover it.
00:36:17.529 [2024-11-18 20:37:29.492049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.529 [2024-11-18 20:37:29.492113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.529 qpair failed and we were unable to recover it.
00:36:17.529 [2024-11-18 20:37:29.492416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.529 [2024-11-18 20:37:29.492489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.529 qpair failed and we were unable to recover it.
00:36:17.529 [2024-11-18 20:37:29.492858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.529 [2024-11-18 20:37:29.492921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.529 qpair failed and we were unable to recover it.
00:36:17.811 [2024-11-18 20:37:29.493243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.811 [2024-11-18 20:37:29.493309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.811 qpair failed and we were unable to recover it.
00:36:17.811 [2024-11-18 20:37:29.493550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.811 [2024-11-18 20:37:29.493614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.811 qpair failed and we were unable to recover it.
00:36:17.811 [2024-11-18 20:37:29.493968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.811 [2024-11-18 20:37:29.494027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.811 qpair failed and we were unable to recover it.
00:36:17.811 [2024-11-18 20:37:29.494300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.811 [2024-11-18 20:37:29.494362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.811 qpair failed and we were unable to recover it.
00:36:17.811 [2024-11-18 20:37:29.494587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.811 [2024-11-18 20:37:29.494675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.811 qpair failed and we were unable to recover it.
00:36:17.811 [2024-11-18 20:37:29.494937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.811 [2024-11-18 20:37:29.494997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.811 qpair failed and we were unable to recover it.
00:36:17.811 [2024-11-18 20:37:29.495200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.811 [2024-11-18 20:37:29.495261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.811 qpair failed and we were unable to recover it.
00:36:17.811 [2024-11-18 20:37:29.495542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.811 [2024-11-18 20:37:29.495608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.811 qpair failed and we were unable to recover it.
00:36:17.811 [2024-11-18 20:37:29.495903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.811 [2024-11-18 20:37:29.495963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.811 qpair failed and we were unable to recover it.
00:36:17.811 [2024-11-18 20:37:29.496174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.811 [2024-11-18 20:37:29.496246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.811 qpair failed and we were unable to recover it.
00:36:17.811 [2024-11-18 20:37:29.496478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.812 [2024-11-18 20:37:29.496538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.812 qpair failed and we were unable to recover it.
00:36:17.812 [2024-11-18 20:37:29.496839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.812 [2024-11-18 20:37:29.496934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.812 qpair failed and we were unable to recover it.
00:36:17.812 [2024-11-18 20:37:29.497238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.812 [2024-11-18 20:37:29.497316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.812 qpair failed and we were unable to recover it.
00:36:17.812 [2024-11-18 20:37:29.497533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.812 [2024-11-18 20:37:29.497595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.812 qpair failed and we were unable to recover it.
00:36:17.812 [2024-11-18 20:37:29.497819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.812 [2024-11-18 20:37:29.497882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.812 qpair failed and we were unable to recover it.
00:36:17.812 [2024-11-18 20:37:29.498143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.812 [2024-11-18 20:37:29.498205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:17.812 qpair failed and we were unable to recover it.
00:36:17.812 [2024-11-18 20:37:29.498430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.812 [2024-11-18 20:37:29.498492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.812 qpair failed and we were unable to recover it. 00:36:17.812 [2024-11-18 20:37:29.498699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.812 [2024-11-18 20:37:29.498762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.812 qpair failed and we were unable to recover it. 00:36:17.812 [2024-11-18 20:37:29.499001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.812 [2024-11-18 20:37:29.499064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.812 qpair failed and we were unable to recover it. 00:36:17.812 [2024-11-18 20:37:29.499333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.812 [2024-11-18 20:37:29.499393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.812 qpair failed and we were unable to recover it. 00:36:17.812 [2024-11-18 20:37:29.499631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.812 [2024-11-18 20:37:29.499708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.812 qpair failed and we were unable to recover it. 
00:36:17.812 [2024-11-18 20:37:29.499981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.812 [2024-11-18 20:37:29.500043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.812 qpair failed and we were unable to recover it. 00:36:17.812 [2024-11-18 20:37:29.500268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.812 [2024-11-18 20:37:29.500330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.812 qpair failed and we were unable to recover it. 00:36:17.812 [2024-11-18 20:37:29.500564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.812 [2024-11-18 20:37:29.500625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.812 qpair failed and we were unable to recover it. 00:36:17.812 [2024-11-18 20:37:29.500908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.812 [2024-11-18 20:37:29.500968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.812 qpair failed and we were unable to recover it. 00:36:17.812 [2024-11-18 20:37:29.501204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.812 [2024-11-18 20:37:29.501265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.812 qpair failed and we were unable to recover it. 
00:36:17.812 [2024-11-18 20:37:29.501550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.812 [2024-11-18 20:37:29.501612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.812 qpair failed and we were unable to recover it. 00:36:17.812 [2024-11-18 20:37:29.501865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.812 [2024-11-18 20:37:29.501926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.812 qpair failed and we were unable to recover it. 00:36:17.812 [2024-11-18 20:37:29.502162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.812 [2024-11-18 20:37:29.502223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.812 qpair failed and we were unable to recover it. 00:36:17.812 [2024-11-18 20:37:29.502426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.812 [2024-11-18 20:37:29.502486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.812 qpair failed and we were unable to recover it. 00:36:17.812 [2024-11-18 20:37:29.502711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.812 [2024-11-18 20:37:29.502774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.812 qpair failed and we were unable to recover it. 
00:36:17.812 [2024-11-18 20:37:29.503043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.812 [2024-11-18 20:37:29.503103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.812 qpair failed and we were unable to recover it. 00:36:17.812 [2024-11-18 20:37:29.503305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.812 [2024-11-18 20:37:29.503367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.812 qpair failed and we were unable to recover it. 00:36:17.812 [2024-11-18 20:37:29.503586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.812 [2024-11-18 20:37:29.503657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.812 qpair failed and we were unable to recover it. 00:36:17.812 [2024-11-18 20:37:29.503838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.812 [2024-11-18 20:37:29.503898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.812 qpair failed and we were unable to recover it. 00:36:17.812 [2024-11-18 20:37:29.504133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.812 [2024-11-18 20:37:29.504198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.812 qpair failed and we were unable to recover it. 
00:36:17.812 [2024-11-18 20:37:29.504432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.812 [2024-11-18 20:37:29.504493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.812 qpair failed and we were unable to recover it. 00:36:17.812 [2024-11-18 20:37:29.504762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.812 [2024-11-18 20:37:29.504824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.812 qpair failed and we were unable to recover it. 00:36:17.812 [2024-11-18 20:37:29.505097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.812 [2024-11-18 20:37:29.505159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.812 qpair failed and we were unable to recover it. 00:36:17.812 [2024-11-18 20:37:29.505412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.812 [2024-11-18 20:37:29.505474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.812 qpair failed and we were unable to recover it. 00:36:17.812 [2024-11-18 20:37:29.505680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.812 [2024-11-18 20:37:29.505744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.812 qpair failed and we were unable to recover it. 
00:36:17.812 [2024-11-18 20:37:29.505982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.812 [2024-11-18 20:37:29.506044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.813 [2024-11-18 20:37:29.506244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.506307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.813 [2024-11-18 20:37:29.506556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.506620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.813 [2024-11-18 20:37:29.506884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.506947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.813 [2024-11-18 20:37:29.507186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.507247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 
00:36:17.813 [2024-11-18 20:37:29.507476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.507538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.813 [2024-11-18 20:37:29.507827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.507890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.813 [2024-11-18 20:37:29.508122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.508183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.813 [2024-11-18 20:37:29.508446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.508507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.813 [2024-11-18 20:37:29.508732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.508795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 
00:36:17.813 [2024-11-18 20:37:29.509034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.509094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.813 [2024-11-18 20:37:29.509335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.509406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.813 [2024-11-18 20:37:29.509685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.509746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.813 [2024-11-18 20:37:29.509927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.509987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.813 [2024-11-18 20:37:29.510221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.510282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 
00:36:17.813 [2024-11-18 20:37:29.510510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.510570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.813 [2024-11-18 20:37:29.510858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.510920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.813 [2024-11-18 20:37:29.511151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.511212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.813 [2024-11-18 20:37:29.511449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.511511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.813 [2024-11-18 20:37:29.511696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.511759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 
00:36:17.813 [2024-11-18 20:37:29.511959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.512023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.813 [2024-11-18 20:37:29.512295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.512356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.813 [2024-11-18 20:37:29.512628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.512707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.813 [2024-11-18 20:37:29.512981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.513042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.813 [2024-11-18 20:37:29.513283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.513344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 
00:36:17.813 [2024-11-18 20:37:29.513620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.513714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.813 [2024-11-18 20:37:29.513955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.514017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.813 [2024-11-18 20:37:29.514242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.514302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.813 [2024-11-18 20:37:29.514545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.514606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.813 [2024-11-18 20:37:29.514899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.514961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 
00:36:17.813 [2024-11-18 20:37:29.515248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.515309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.813 [2024-11-18 20:37:29.515542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.515607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.813 [2024-11-18 20:37:29.515909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.515971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.813 [2024-11-18 20:37:29.516235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.516296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.813 [2024-11-18 20:37:29.516485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.516550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 
00:36:17.813 [2024-11-18 20:37:29.516842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.516907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.813 [2024-11-18 20:37:29.517186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.517247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.813 [2024-11-18 20:37:29.517478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.813 [2024-11-18 20:37:29.517539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.813 qpair failed and we were unable to recover it. 00:36:17.814 [2024-11-18 20:37:29.517780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.814 [2024-11-18 20:37:29.517843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.814 qpair failed and we were unable to recover it. 00:36:17.814 [2024-11-18 20:37:29.518078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.814 [2024-11-18 20:37:29.518139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.814 qpair failed and we were unable to recover it. 
00:36:17.814 [2024-11-18 20:37:29.518421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.814 [2024-11-18 20:37:29.518484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.814 qpair failed and we were unable to recover it. 00:36:17.814 [2024-11-18 20:37:29.518717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.814 [2024-11-18 20:37:29.518780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.814 qpair failed and we were unable to recover it. 00:36:17.814 [2024-11-18 20:37:29.518963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.814 [2024-11-18 20:37:29.519027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.814 qpair failed and we were unable to recover it. 00:36:17.814 [2024-11-18 20:37:29.519265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.814 [2024-11-18 20:37:29.519327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.814 qpair failed and we were unable to recover it. 00:36:17.814 [2024-11-18 20:37:29.519502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.814 [2024-11-18 20:37:29.519566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.814 qpair failed and we were unable to recover it. 
00:36:17.814 [2024-11-18 20:37:29.519847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.814 [2024-11-18 20:37:29.519910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.814 qpair failed and we were unable to recover it. 00:36:17.814 [2024-11-18 20:37:29.520133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.814 [2024-11-18 20:37:29.520194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.814 qpair failed and we were unable to recover it. 00:36:17.814 [2024-11-18 20:37:29.520466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.814 [2024-11-18 20:37:29.520527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.814 qpair failed and we were unable to recover it. 00:36:17.814 [2024-11-18 20:37:29.520743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.814 [2024-11-18 20:37:29.520805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.814 qpair failed and we were unable to recover it. 00:36:17.814 [2024-11-18 20:37:29.521037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.814 [2024-11-18 20:37:29.521101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.814 qpair failed and we were unable to recover it. 
00:36:17.814 [2024-11-18 20:37:29.521374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:17.814 [2024-11-18 20:37:29.521434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 
00:36:17.814 qpair failed and we were unable to recover it. 
[... identical connect()/qpair-failure triplet (errno = 111, tqpair=0x7fe694000b90, addr=10.0.0.2, port=4420) repeated through 2024-11-18 20:37:29.558419 ...]
00:36:17.817 [2024-11-18 20:37:29.558661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.817 [2024-11-18 20:37:29.558725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.817 qpair failed and we were unable to recover it. 00:36:17.817 [2024-11-18 20:37:29.559017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.817 [2024-11-18 20:37:29.559079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.817 qpair failed and we were unable to recover it. 00:36:17.817 [2024-11-18 20:37:29.559331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.817 [2024-11-18 20:37:29.559393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.817 qpair failed and we were unable to recover it. 00:36:17.817 [2024-11-18 20:37:29.559585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.817 [2024-11-18 20:37:29.559660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.817 qpair failed and we were unable to recover it. 00:36:17.817 [2024-11-18 20:37:29.559906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.817 [2024-11-18 20:37:29.559968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.817 qpair failed and we were unable to recover it. 
00:36:17.818 [2024-11-18 20:37:29.560250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.560312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 00:36:17.818 [2024-11-18 20:37:29.560547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.560617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 00:36:17.818 [2024-11-18 20:37:29.560909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.560971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 00:36:17.818 [2024-11-18 20:37:29.561195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.561257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 00:36:17.818 [2024-11-18 20:37:29.561525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.561586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 
00:36:17.818 [2024-11-18 20:37:29.561826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.561889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 00:36:17.818 [2024-11-18 20:37:29.562159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.562220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 00:36:17.818 [2024-11-18 20:37:29.562486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.562548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 00:36:17.818 [2024-11-18 20:37:29.562795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.562858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 00:36:17.818 [2024-11-18 20:37:29.563088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.563151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 
00:36:17.818 [2024-11-18 20:37:29.563430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.563492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 00:36:17.818 [2024-11-18 20:37:29.563710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.563774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 00:36:17.818 [2024-11-18 20:37:29.564042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.564102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 00:36:17.818 [2024-11-18 20:37:29.564334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.564397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 00:36:17.818 [2024-11-18 20:37:29.564626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.564701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 
00:36:17.818 [2024-11-18 20:37:29.564947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.565009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 00:36:17.818 [2024-11-18 20:37:29.565240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.565303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 00:36:17.818 [2024-11-18 20:37:29.565562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.565624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 00:36:17.818 [2024-11-18 20:37:29.565905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.565966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 00:36:17.818 [2024-11-18 20:37:29.566235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.566296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 
00:36:17.818 [2024-11-18 20:37:29.566566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.566627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 00:36:17.818 [2024-11-18 20:37:29.566881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.566942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 00:36:17.818 [2024-11-18 20:37:29.567208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.567269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 00:36:17.818 [2024-11-18 20:37:29.567533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.567594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 00:36:17.818 [2024-11-18 20:37:29.567849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.567911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 
00:36:17.818 [2024-11-18 20:37:29.568149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.568210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 00:36:17.818 [2024-11-18 20:37:29.568391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.568454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 00:36:17.818 [2024-11-18 20:37:29.568736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.568799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 00:36:17.818 [2024-11-18 20:37:29.569041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.569102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 00:36:17.818 [2024-11-18 20:37:29.569331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.569392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 
00:36:17.818 [2024-11-18 20:37:29.569633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.569708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 00:36:17.818 [2024-11-18 20:37:29.569981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.570043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 00:36:17.818 [2024-11-18 20:37:29.570284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.818 [2024-11-18 20:37:29.570345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.818 qpair failed and we were unable to recover it. 00:36:17.818 [2024-11-18 20:37:29.570573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.570649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.570934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.570995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 
00:36:17.819 [2024-11-18 20:37:29.571213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.571274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.571471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.571531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.571727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.571789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.572035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.572096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.572309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.572369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 
00:36:17.819 [2024-11-18 20:37:29.572602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.572683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.572972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.573049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.573327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.573388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.573620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.573698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.573968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.574031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 
00:36:17.819 [2024-11-18 20:37:29.574269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.574331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.574599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.574678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.574915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.574976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.575249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.575310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.575538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.575599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 
00:36:17.819 [2024-11-18 20:37:29.575804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.575866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.576051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.576114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.576395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.576457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.576740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.576802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.577078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.577139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 
00:36:17.819 [2024-11-18 20:37:29.577382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.577445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.577690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.577752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.578001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.578062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.578332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.578395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.578667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.578729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 
00:36:17.819 [2024-11-18 20:37:29.579007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.579067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.579241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.579305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.579578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.579655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.579932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.579994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.580273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.580335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 
00:36:17.819 [2024-11-18 20:37:29.580570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.580631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.580878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.580940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.581135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.581196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.581448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.581509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.581747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.581810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 
00:36:17.819 [2024-11-18 20:37:29.582030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.582091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.582372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.582434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.582664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.582729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.582956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.583017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.583279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.583341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 
00:36:17.819 [2024-11-18 20:37:29.583618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.583691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.583975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.584036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.584286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.584347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.584616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.584691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.584926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.584989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 
00:36:17.819 [2024-11-18 20:37:29.585262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.585323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.585601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.585707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.585964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.586025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.586241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.586302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.586547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.586608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 
00:36:17.819 [2024-11-18 20:37:29.586826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.586887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.587069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.587131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.587369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.587431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.587651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.587714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.587964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.588025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 
00:36:17.819 [2024-11-18 20:37:29.588251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.588314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.588587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.588664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.588897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.588959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.589193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.589257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.589466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.589527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 
00:36:17.819 [2024-11-18 20:37:29.589835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.589898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.590184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.590245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.590517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.590579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.590881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.590944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.591222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.591283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 
00:36:17.819 [2024-11-18 20:37:29.591519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.591581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.591847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.591911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.592126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.592190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.592464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.592526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 00:36:17.819 [2024-11-18 20:37:29.592809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.819 [2024-11-18 20:37:29.592873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.819 qpair failed and we were unable to recover it. 
00:36:17.820 [2024-11-18 20:37:29.593070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.593131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.593397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.593458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.593691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.593753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.594005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.594067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.594297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.594360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 
00:36:17.820 [2024-11-18 20:37:29.594595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.594670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.594941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.595002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.595236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.595298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.595527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.595590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.595890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.595952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 
00:36:17.820 [2024-11-18 20:37:29.596143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.596205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.596484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.596545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.596792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.596828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.596976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.597011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.597156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.597191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 
00:36:17.820 [2024-11-18 20:37:29.597327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.597362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.597564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.597604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.597765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.597800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.597931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.597965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.598132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.598166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 
00:36:17.820 [2024-11-18 20:37:29.598301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.598334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.598457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.598491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.598613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.598655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.598833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.598865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.599023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.599055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 
00:36:17.820 [2024-11-18 20:37:29.599188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.599220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.599377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.599410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.599566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.599598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.599736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.599769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.599927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.599960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 
00:36:17.820 [2024-11-18 20:37:29.600096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.600128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.600248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.600281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.600407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.600439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.600575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.600608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.600757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.600790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 
00:36:17.820 [2024-11-18 20:37:29.600944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.600975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.601132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.601164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.601314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.601376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.601684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.601716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.601841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.601873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 
00:36:17.820 [2024-11-18 20:37:29.601966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.601998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.602089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.602121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.602290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.602356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.602587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.602620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.602767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.602799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 
00:36:17.820 [2024-11-18 20:37:29.602926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.602956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.603101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.603154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.603325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.603384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.603512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.603542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.603664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.603695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 
00:36:17.820 [2024-11-18 20:37:29.603826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.603856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.603976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.604006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.604109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.604140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.604225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.820 [2024-11-18 20:37:29.604255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.820 qpair failed and we were unable to recover it. 00:36:17.820 [2024-11-18 20:37:29.604376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.821 [2024-11-18 20:37:29.604410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.821 qpair failed and we were unable to recover it. 
00:36:17.821 [2024-11-18 20:37:29.604536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.821 [2024-11-18 20:37:29.604567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.821 qpair failed and we were unable to recover it. 00:36:17.821 [2024-11-18 20:37:29.604689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.821 [2024-11-18 20:37:29.604721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.821 qpair failed and we were unable to recover it. 00:36:17.821 [2024-11-18 20:37:29.604827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.821 [2024-11-18 20:37:29.604860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.821 qpair failed and we were unable to recover it. 00:36:17.821 [2024-11-18 20:37:29.604985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.821 [2024-11-18 20:37:29.605016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.821 qpair failed and we were unable to recover it. 00:36:17.821 [2024-11-18 20:37:29.605171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.821 [2024-11-18 20:37:29.605202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.821 qpair failed and we were unable to recover it. 
00:36:17.821 [2024-11-18 20:37:29.605356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.821 [2024-11-18 20:37:29.605387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.821 qpair failed and we were unable to recover it. 00:36:17.821 [2024-11-18 20:37:29.605517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.821 [2024-11-18 20:37:29.605548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.821 qpair failed and we were unable to recover it. 00:36:17.821 [2024-11-18 20:37:29.605682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.821 [2024-11-18 20:37:29.605714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.821 qpair failed and we were unable to recover it. 00:36:17.821 [2024-11-18 20:37:29.605821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.821 [2024-11-18 20:37:29.605852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.821 qpair failed and we were unable to recover it. 00:36:17.821 [2024-11-18 20:37:29.605974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.821 [2024-11-18 20:37:29.606010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.821 qpair failed and we were unable to recover it. 
00:36:17.823 [2024-11-18 20:37:29.620974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.823 [2024-11-18 20:37:29.621022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.823 qpair failed and we were unable to recover it.
00:36:17.824 [2024-11-18 20:37:29.624820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-11-18 20:37:29.624849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.824 qpair failed and we were unable to recover it. 00:36:17.824 [2024-11-18 20:37:29.624976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-11-18 20:37:29.625007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.824 qpair failed and we were unable to recover it. 00:36:17.824 [2024-11-18 20:37:29.625115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-11-18 20:37:29.625145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.824 qpair failed and we were unable to recover it. 00:36:17.824 [2024-11-18 20:37:29.625266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-11-18 20:37:29.625300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.824 qpair failed and we were unable to recover it. 00:36:17.824 [2024-11-18 20:37:29.625421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-11-18 20:37:29.625452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.824 qpair failed and we were unable to recover it. 
00:36:17.824 [2024-11-18 20:37:29.625557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-11-18 20:37:29.625587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.824 qpair failed and we were unable to recover it. 00:36:17.824 [2024-11-18 20:37:29.625700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-11-18 20:37:29.625731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.824 qpair failed and we were unable to recover it. 00:36:17.824 [2024-11-18 20:37:29.625853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-11-18 20:37:29.625883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.824 qpair failed and we were unable to recover it. 00:36:17.824 [2024-11-18 20:37:29.626013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-11-18 20:37:29.626043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.824 qpair failed and we were unable to recover it. 00:36:17.824 [2024-11-18 20:37:29.626197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-11-18 20:37:29.626227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.824 qpair failed and we were unable to recover it. 
00:36:17.824 [2024-11-18 20:37:29.626325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-11-18 20:37:29.626354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.824 qpair failed and we were unable to recover it. 00:36:17.824 [2024-11-18 20:37:29.626447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-11-18 20:37:29.626477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.824 qpair failed and we were unable to recover it. 00:36:17.824 [2024-11-18 20:37:29.626586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-11-18 20:37:29.626616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.824 qpair failed and we were unable to recover it. 00:36:17.824 [2024-11-18 20:37:29.626722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-11-18 20:37:29.626752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.824 qpair failed and we were unable to recover it. 00:36:17.824 [2024-11-18 20:37:29.626849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-11-18 20:37:29.626879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.824 qpair failed and we were unable to recover it. 
00:36:17.824 [2024-11-18 20:37:29.627016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-11-18 20:37:29.627046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.824 qpair failed and we were unable to recover it. 00:36:17.824 [2024-11-18 20:37:29.627151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-11-18 20:37:29.627181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.824 qpair failed and we were unable to recover it. 00:36:17.824 [2024-11-18 20:37:29.627280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-11-18 20:37:29.627310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.824 qpair failed and we were unable to recover it. 00:36:17.824 [2024-11-18 20:37:29.627440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-11-18 20:37:29.627470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.824 qpair failed and we were unable to recover it. 00:36:17.824 [2024-11-18 20:37:29.627565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-11-18 20:37:29.627595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.824 qpair failed and we were unable to recover it. 
00:36:17.824 [2024-11-18 20:37:29.627728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-11-18 20:37:29.627759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.824 qpair failed and we were unable to recover it. 00:36:17.824 [2024-11-18 20:37:29.627888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-11-18 20:37:29.627918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.824 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.628048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.628077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.628201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.628231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.628386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.628416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 
00:36:17.825 [2024-11-18 20:37:29.628533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.628562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.628668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.628699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.628800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.628831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.628957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.628987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.629112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.629147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 
00:36:17.825 [2024-11-18 20:37:29.629239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.629269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.629359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.629388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.629487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.629517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.629617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.629656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.629759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.629789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 
00:36:17.825 [2024-11-18 20:37:29.629881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.629912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.630011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.630041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.630132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.630162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.630260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.630290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.630378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.630408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 
00:36:17.825 [2024-11-18 20:37:29.630501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.630531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.630623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.630660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.630752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.630782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.630912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.630942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.631033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.631063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 
00:36:17.825 [2024-11-18 20:37:29.631159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.631189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.631280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.631310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.631441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.631471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.631574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.631604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.631710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.631741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 
00:36:17.825 [2024-11-18 20:37:29.631844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.631874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.632004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.632034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.632162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.632192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.632326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.632356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.632481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.632511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 
00:36:17.825 [2024-11-18 20:37:29.632645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.632676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.632778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.632813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.632958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.632989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.633141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.633171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 00:36:17.825 [2024-11-18 20:37:29.633264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.825 [2024-11-18 20:37:29.633294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.825 qpair failed and we were unable to recover it. 
00:36:17.826 [2024-11-18 20:37:29.633388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.826 [2024-11-18 20:37:29.633418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.826 qpair failed and we were unable to recover it. 00:36:17.826 [2024-11-18 20:37:29.633516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.826 [2024-11-18 20:37:29.633547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.826 qpair failed and we were unable to recover it. 00:36:17.826 [2024-11-18 20:37:29.633680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.826 [2024-11-18 20:37:29.633712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.826 qpair failed and we were unable to recover it. 00:36:17.826 [2024-11-18 20:37:29.633807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.826 [2024-11-18 20:37:29.633837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.826 qpair failed and we were unable to recover it. 00:36:17.826 [2024-11-18 20:37:29.633936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.826 [2024-11-18 20:37:29.633966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.826 qpair failed and we were unable to recover it. 
00:36:17.826 [2024-11-18 20:37:29.634063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.826 [2024-11-18 20:37:29.634093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.826 qpair failed and we were unable to recover it. 00:36:17.826 [2024-11-18 20:37:29.634191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.826 [2024-11-18 20:37:29.634220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.826 qpair failed and we were unable to recover it. 00:36:17.826 [2024-11-18 20:37:29.634353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.826 [2024-11-18 20:37:29.634383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.826 qpair failed and we were unable to recover it. 00:36:17.826 [2024-11-18 20:37:29.634476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.826 [2024-11-18 20:37:29.634506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.826 qpair failed and we were unable to recover it. 00:36:17.826 [2024-11-18 20:37:29.634589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.826 [2024-11-18 20:37:29.634619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.826 qpair failed and we were unable to recover it. 
00:36:17.826 [2024-11-18 20:37:29.634764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.826 [2024-11-18 20:37:29.634794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.826 qpair failed and we were unable to recover it. 00:36:17.826 [2024-11-18 20:37:29.634963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.826 [2024-11-18 20:37:29.634993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.826 qpair failed and we were unable to recover it. 00:36:17.826 [2024-11-18 20:37:29.635116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.826 [2024-11-18 20:37:29.635146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.826 qpair failed and we were unable to recover it. 00:36:17.826 [2024-11-18 20:37:29.635240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.826 [2024-11-18 20:37:29.635271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.826 qpair failed and we were unable to recover it. 00:36:17.826 [2024-11-18 20:37:29.635363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.826 [2024-11-18 20:37:29.635393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.826 qpair failed and we were unable to recover it. 
00:36:17.826 [2024-11-18 20:37:29.635515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.826 [2024-11-18 20:37:29.635545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.826 qpair failed and we were unable to recover it. 
00:36:17.829 [2024-11-18 20:37:29.650537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.829 [2024-11-18 20:37:29.650593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.829 qpair failed and we were unable to recover it. 
00:36:17.829 [2024-11-18 20:37:29.652432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.829 [2024-11-18 20:37:29.652466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.829 qpair failed and we were unable to recover it. 00:36:17.829 [2024-11-18 20:37:29.652604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.829 [2024-11-18 20:37:29.652650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.829 qpair failed and we were unable to recover it. 00:36:17.829 [2024-11-18 20:37:29.652787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.829 [2024-11-18 20:37:29.652817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.829 qpair failed and we were unable to recover it. 00:36:17.829 [2024-11-18 20:37:29.652923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.829 [2024-11-18 20:37:29.652954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.829 qpair failed and we were unable to recover it. 00:36:17.829 [2024-11-18 20:37:29.653055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.829 [2024-11-18 20:37:29.653086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.829 qpair failed and we were unable to recover it. 
00:36:17.829 [2024-11-18 20:37:29.653239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.829 [2024-11-18 20:37:29.653290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.829 qpair failed and we were unable to recover it. 00:36:17.829 [2024-11-18 20:37:29.653435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.829 [2024-11-18 20:37:29.653469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.829 qpair failed and we were unable to recover it. 00:36:17.829 [2024-11-18 20:37:29.653590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.829 [2024-11-18 20:37:29.653620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.829 qpair failed and we were unable to recover it. 00:36:17.829 [2024-11-18 20:37:29.653727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.829 [2024-11-18 20:37:29.653759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.829 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.653863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.653893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 
00:36:17.830 [2024-11-18 20:37:29.653987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.654018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.654130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.654165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.654296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.654331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.654436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.654470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.654608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.654653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 
00:36:17.830 [2024-11-18 20:37:29.654798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.654846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.654973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.655023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.655175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.655226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.655371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.655421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.655552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.655584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 
00:36:17.830 [2024-11-18 20:37:29.655704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.655735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.655837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.655867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.655970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.656002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.656106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.656138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.656227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.656257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 
00:36:17.830 [2024-11-18 20:37:29.656378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.656409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.656543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.656574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.656674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.656709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.656848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.656880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.657016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.657047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 
00:36:17.830 [2024-11-18 20:37:29.657158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.657192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.657368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.657399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.657527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.657558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.657696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.657731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.657841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.657873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 
00:36:17.830 [2024-11-18 20:37:29.657975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.658005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.658132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.658163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.658299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.658330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.658452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.658483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.658585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.658616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 
00:36:17.830 [2024-11-18 20:37:29.658715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.658746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.658880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.658910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.659013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.659050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.659149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.659180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.659276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.659307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 
00:36:17.830 [2024-11-18 20:37:29.659407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.659437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.659523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.659553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.659688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.659719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.659810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.659841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.659995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.660046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 
00:36:17.830 [2024-11-18 20:37:29.660158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.660192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.660331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.660363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.660455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.660486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.660595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.660626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.660733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.660764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 
00:36:17.830 [2024-11-18 20:37:29.660859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.660889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.661020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.661051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.661224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.661259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.661389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.830 [2024-11-18 20:37:29.661424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:17.830 qpair failed and we were unable to recover it. 00:36:17.830 [2024-11-18 20:37:29.661539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.831 [2024-11-18 20:37:29.661575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:17.831 qpair failed and we were unable to recover it. 
00:36:17.831 [2024-11-18 20:37:29.661704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.831 [2024-11-18 20:37:29.661736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:17.831 qpair failed and we were unable to recover it. 00:36:17.831 [2024-11-18 20:37:29.661840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.831 [2024-11-18 20:37:29.661871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:17.831 qpair failed and we were unable to recover it. 00:36:17.831 [2024-11-18 20:37:29.661999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.831 [2024-11-18 20:37:29.662030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:17.831 qpair failed and we were unable to recover it. 00:36:17.831 [2024-11-18 20:37:29.662191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.831 [2024-11-18 20:37:29.662222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:17.831 qpair failed and we were unable to recover it. 00:36:17.831 [2024-11-18 20:37:29.662354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.831 [2024-11-18 20:37:29.662386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:17.831 qpair failed and we were unable to recover it. 
00:36:17.831 [2024-11-18 20:37:29.662519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.831 [2024-11-18 20:37:29.662551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.831 qpair failed and we were unable to recover it. 00:36:17.831 [2024-11-18 20:37:29.662611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x167f970 (9): Bad file descriptor 00:36:17.831 [2024-11-18 20:37:29.662790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.831 [2024-11-18 20:37:29.662827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.831 qpair failed and we were unable to recover it. 00:36:17.831 [2024-11-18 20:37:29.662967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.831 [2024-11-18 20:37:29.663004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.831 qpair failed and we were unable to recover it. 00:36:17.831 [2024-11-18 20:37:29.663154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.831 [2024-11-18 20:37:29.663189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.831 qpair failed and we were unable to recover it. 00:36:17.831 [2024-11-18 20:37:29.663328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.831 [2024-11-18 20:37:29.663377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.831 qpair failed and we were unable to recover it. 
00:36:17.831 [2024-11-18 20:37:29.663489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.831 [2024-11-18 20:37:29.663523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.831 qpair failed and we were unable to recover it. 00:36:17.831 [2024-11-18 20:37:29.663707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.831 [2024-11-18 20:37:29.663738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.831 qpair failed and we were unable to recover it. 00:36:17.831 [2024-11-18 20:37:29.663832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.831 [2024-11-18 20:37:29.663863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.831 qpair failed and we were unable to recover it. 00:36:17.831 [2024-11-18 20:37:29.663995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.831 [2024-11-18 20:37:29.664026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.831 qpair failed and we were unable to recover it. 00:36:17.831 [2024-11-18 20:37:29.664149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.831 [2024-11-18 20:37:29.664183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.831 qpair failed and we were unable to recover it. 
00:36:17.831 [2024-11-18 20:37:29.664357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.831 [2024-11-18 20:37:29.664391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.831 qpair failed and we were unable to recover it. 00:36:17.831 [2024-11-18 20:37:29.664560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.831 [2024-11-18 20:37:29.664594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.831 qpair failed and we were unable to recover it. 00:36:17.831 [2024-11-18 20:37:29.664723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.831 [2024-11-18 20:37:29.664754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.831 qpair failed and we were unable to recover it. 00:36:17.831 [2024-11-18 20:37:29.664862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.831 [2024-11-18 20:37:29.664892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.831 qpair failed and we were unable to recover it. 00:36:17.831 [2024-11-18 20:37:29.665030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.831 [2024-11-18 20:37:29.665078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.831 qpair failed and we were unable to recover it. 
[... the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." sequence repeats through 20:37:29.683131 for tqpair=0x7fe6a0000b90, 0x7fe698000b90, and 0x1671b40, all with addr=10.0.0.2, port=4420 ...]
00:36:17.832 [2024-11-18 20:37:29.683230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.832 [2024-11-18 20:37:29.683260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.832 qpair failed and we were unable to recover it. 00:36:17.832 [2024-11-18 20:37:29.683443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.832 [2024-11-18 20:37:29.683477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.832 qpair failed and we were unable to recover it. 00:36:17.832 [2024-11-18 20:37:29.683653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.832 [2024-11-18 20:37:29.683704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.832 qpair failed and we were unable to recover it. 00:36:17.832 [2024-11-18 20:37:29.683814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.832 [2024-11-18 20:37:29.683844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.832 qpair failed and we were unable to recover it. 00:36:17.832 [2024-11-18 20:37:29.684002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.832 [2024-11-18 20:37:29.684032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.832 qpair failed and we were unable to recover it. 
00:36:17.832 [2024-11-18 20:37:29.684135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.832 [2024-11-18 20:37:29.684165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.684323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.684358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.684554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.684588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.684730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.684761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.684862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.684893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 
00:36:17.833 [2024-11-18 20:37:29.685019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.685051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.685212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.685242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.685386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.685456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.685623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.685668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.685776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.685807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 
00:36:17.833 [2024-11-18 20:37:29.685907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.685938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.686040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.686070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.686159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.686190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.686309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.686345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.686576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.686611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 
00:36:17.833 [2024-11-18 20:37:29.686731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.686761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.686876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.686906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.687036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.687066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.687196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.687247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.687419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.687454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 
00:36:17.833 [2024-11-18 20:37:29.687572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.687604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.687727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.687758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.687860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.687892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.688020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.688051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.688177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.688207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 
00:36:17.833 [2024-11-18 20:37:29.688331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.688362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.688502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.688537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.688654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.688702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.688803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.688833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.688935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.688965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 
00:36:17.833 [2024-11-18 20:37:29.689090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.689121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.689297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.689331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.689530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.689564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.689702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.689733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.689855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.689903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 
00:36:17.833 [2024-11-18 20:37:29.690040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.690072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.690210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.690242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.690438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.690472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.690617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.690681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.690833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.690868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 
00:36:17.833 [2024-11-18 20:37:29.691025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.691060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.691197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.691231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.691374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.691408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.691535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.691573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.691699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.691734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 
00:36:17.833 [2024-11-18 20:37:29.691850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.691885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.692005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.692042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.692208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.692249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.692391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.692425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.692578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.692613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 
00:36:17.833 [2024-11-18 20:37:29.692736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.692770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.692886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.692920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.693049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.693085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.693240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.693274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.693391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.693425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 
00:36:17.833 [2024-11-18 20:37:29.693567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.693604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.693735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.693770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.693886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.693920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.694022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.694057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.694228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.694263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 
00:36:17.833 [2024-11-18 20:37:29.694413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.694447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.694601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.694655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.694769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.694803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.694908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.694943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.695083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.695119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 
00:36:17.833 [2024-11-18 20:37:29.695259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.695294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.695409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.695446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.695558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.695596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.695780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.695817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.695966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.696001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 
00:36:17.833 [2024-11-18 20:37:29.696102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.696138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.696273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.696307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.833 [2024-11-18 20:37:29.696475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.833 [2024-11-18 20:37:29.696509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.833 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.696620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.696662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.696778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.696813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 
00:36:17.834 [2024-11-18 20:37:29.696923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.696957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.697096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.697131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.697270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.697304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.697449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.697484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.697585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.697619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 
00:36:17.834 [2024-11-18 20:37:29.697745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.697779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.697890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.697924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.698025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.698060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.698201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.698235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.698384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.698419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 
00:36:17.834 [2024-11-18 20:37:29.698578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.698613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.698728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.698762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.698871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.698912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.699053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.699088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.699225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.699259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 
00:36:17.834 [2024-11-18 20:37:29.699394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.699429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.699572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.699605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.699734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.699770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.699875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.699910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.700875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.700911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 
00:36:17.834 [2024-11-18 20:37:29.701073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.701104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.701906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.701941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.702074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.702104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.702197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.702227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.702318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.702349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 
00:36:17.834 [2024-11-18 20:37:29.702438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.702467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.702623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.702662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.702750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.702780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.702875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.702905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.703019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.703049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 
00:36:17.834 [2024-11-18 20:37:29.703175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.703205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.703296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.703325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.703450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.703479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.703577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.703607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.703704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.703734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 
00:36:17.834 [2024-11-18 20:37:29.703817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.703846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.703943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.703973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.704092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.704122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.704253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.704282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.704408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.704438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 
00:36:17.834 [2024-11-18 20:37:29.704560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.704589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.704697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.704727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.704823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.704853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.704976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.705006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.705102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.705131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 
00:36:17.834 [2024-11-18 20:37:29.705931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.705964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.706115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.706144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.706243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.706272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.706358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.706386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.706509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.706537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 
00:36:17.834 [2024-11-18 20:37:29.706627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.706665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.706786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.706814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.706927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.706962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.707107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.707135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.707264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.707292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 
00:36:17.834 [2024-11-18 20:37:29.707383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.707411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.707502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.707531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.707625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.707662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.707753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.707781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.707905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.707934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 
00:36:17.834 [2024-11-18 20:37:29.708054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.708082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.708168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.708198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.708350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.708378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.708474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.708503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.834 qpair failed and we were unable to recover it. 00:36:17.834 [2024-11-18 20:37:29.708591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.834 [2024-11-18 20:37:29.708619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 
00:36:17.835 [2024-11-18 20:37:29.708718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.708747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 00:36:17.835 [2024-11-18 20:37:29.708855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.708884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 00:36:17.835 [2024-11-18 20:37:29.708967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.708995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 00:36:17.835 [2024-11-18 20:37:29.709086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.709115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 00:36:17.835 [2024-11-18 20:37:29.709238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.709267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 
00:36:17.835 [2024-11-18 20:37:29.709377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.709406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 00:36:17.835 [2024-11-18 20:37:29.709496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.709525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 00:36:17.835 [2024-11-18 20:37:29.709615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.709653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 00:36:17.835 [2024-11-18 20:37:29.709750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.709778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 00:36:17.835 [2024-11-18 20:37:29.709897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.709926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 
00:36:17.835 [2024-11-18 20:37:29.710018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.710047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 00:36:17.835 [2024-11-18 20:37:29.710163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.710193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 00:36:17.835 [2024-11-18 20:37:29.710291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.710320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 00:36:17.835 [2024-11-18 20:37:29.710415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.710444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 00:36:17.835 [2024-11-18 20:37:29.710539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.710567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 
00:36:17.835 [2024-11-18 20:37:29.710660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.710695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 00:36:17.835 [2024-11-18 20:37:29.710816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.710845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 00:36:17.835 [2024-11-18 20:37:29.710992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.711020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 00:36:17.835 [2024-11-18 20:37:29.711106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.711134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 00:36:17.835 [2024-11-18 20:37:29.711253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.711280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 
00:36:17.835 [2024-11-18 20:37:29.711372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.711400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 00:36:17.835 [2024-11-18 20:37:29.711521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.711549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 00:36:17.835 [2024-11-18 20:37:29.711677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.711705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 00:36:17.835 [2024-11-18 20:37:29.711802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.711831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 00:36:17.835 [2024-11-18 20:37:29.711950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.711978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 
00:36:17.835 [2024-11-18 20:37:29.712100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.712128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 00:36:17.835 [2024-11-18 20:37:29.712247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.712275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 00:36:17.835 [2024-11-18 20:37:29.712421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.712453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 00:36:17.835 [2024-11-18 20:37:29.712601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.712629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 00:36:17.835 [2024-11-18 20:37:29.712755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.712783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 
00:36:17.835 [2024-11-18 20:37:29.712865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.712892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 00:36:17.835 [2024-11-18 20:37:29.713048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.713075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 00:36:17.835 [2024-11-18 20:37:29.713223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.713252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 00:36:17.835 [2024-11-18 20:37:29.713377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.713405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 00:36:17.835 [2024-11-18 20:37:29.713482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.835 [2024-11-18 20:37:29.713510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.835 qpair failed and we were unable to recover it. 
00:36:17.835 [2024-11-18 20:37:29.713610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.713645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.713757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.713805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.713948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.713977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.714099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.714129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.714279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.714307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.714424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.714453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.714557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.714584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.714683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.714711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.714826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.714854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.714946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.714973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.715116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.715143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.715263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.715291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.715415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.715443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.715561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.715589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.715691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.715720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.715808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.715837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.715927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.715956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.716084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.716111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.716265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.716294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.716433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.716475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.716573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.716615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.716773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.716824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.717052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.717083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.717222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.717268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.717418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.717446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.717563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.717592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.717710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.717738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.717861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.717890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.718044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.718073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.718162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.718190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.718307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.718335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.718480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.718509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.718602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.718630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.718741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.835 [2024-11-18 20:37:29.718768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.835 qpair failed and we were unable to recover it.
00:36:17.835 [2024-11-18 20:37:29.718851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.718879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.718973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.719002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.719119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.719146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.719230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.719258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.719375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.719403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.719527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.719555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.719676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.719705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.719806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.719833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.719960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.719988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.720113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.720140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.720217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.720245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.720362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.720390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.720472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.720500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.720594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.720622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.720725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.720754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.720848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.720876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.720962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.720990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.721094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.721122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.721210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.721239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.721330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.721359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.721483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.721511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.721604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.721632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.721736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.721764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.721851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.721879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.722002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.722031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.722146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.722180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.722300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.722329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.722410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.722437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.722536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.722564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.722671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.722700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.722801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.722830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.722920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.722947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.723066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.723094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.723218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.723245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.723335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.723363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.723459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.723486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.723567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.723594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.723733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.723761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.723856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.723883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.724009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.724036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.724129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.724156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.724251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.724279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.724362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.724391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.724483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.724519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.724660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.724688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.724773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.724801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.724888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.724916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.725015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.725051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.725217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.725253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.725368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.725405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.725528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.725564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.725706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.725743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.725844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.725875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.725987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.726018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.726116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.726146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.726237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.726265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.726354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.726382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.726470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.726499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.726589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.726618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.726776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.726811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.726919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.836 [2024-11-18 20:37:29.726954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:17.836 qpair failed and we were unable to recover it.
00:36:17.836 [2024-11-18 20:37:29.727089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.836 [2024-11-18 20:37:29.727125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.836 qpair failed and we were unable to recover it. 00:36:17.836 [2024-11-18 20:37:29.727352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.836 [2024-11-18 20:37:29.727391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.836 qpair failed and we were unable to recover it. 00:36:17.836 [2024-11-18 20:37:29.727569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.836 [2024-11-18 20:37:29.727599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.836 qpair failed and we were unable to recover it. 00:36:17.836 [2024-11-18 20:37:29.727700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.836 [2024-11-18 20:37:29.727730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.836 qpair failed and we were unable to recover it. 00:36:17.836 [2024-11-18 20:37:29.727830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.836 [2024-11-18 20:37:29.727870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.836 qpair failed and we were unable to recover it. 
00:36:17.836 [2024-11-18 20:37:29.728085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.836 [2024-11-18 20:37:29.728146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.836 qpair failed and we were unable to recover it. 00:36:17.836 [2024-11-18 20:37:29.728427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.836 [2024-11-18 20:37:29.728498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.836 qpair failed and we were unable to recover it. 00:36:17.836 [2024-11-18 20:37:29.728730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.836 [2024-11-18 20:37:29.728760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.836 qpair failed and we were unable to recover it. 00:36:17.836 [2024-11-18 20:37:29.728874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.836 [2024-11-18 20:37:29.728909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.836 qpair failed and we were unable to recover it. 00:36:17.836 [2024-11-18 20:37:29.729062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.836 [2024-11-18 20:37:29.729097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.836 qpair failed and we were unable to recover it. 
00:36:17.836 [2024-11-18 20:37:29.729235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.836 [2024-11-18 20:37:29.729269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.836 qpair failed and we were unable to recover it. 00:36:17.836 [2024-11-18 20:37:29.729389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.836 [2024-11-18 20:37:29.729422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.729550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.729578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.729663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.729692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.729806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.729840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 
00:36:17.837 [2024-11-18 20:37:29.729974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.730002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.730101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.730128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.730221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.730248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.730334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.730362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.730476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.730504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 
00:36:17.837 [2024-11-18 20:37:29.730622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.730659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.730785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.730812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.730906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.730935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.731058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.731085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.731201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.731231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 
00:36:17.837 [2024-11-18 20:37:29.731377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.731405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.731491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.731519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.731607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.731634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.731745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.731773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.731857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.731884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 
00:36:17.837 [2024-11-18 20:37:29.732035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.732063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.732212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.732245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.732333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.732360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.732485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.732513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.732598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.732626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 
00:36:17.837 [2024-11-18 20:37:29.732739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.732768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.732850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.732877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.732996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.733024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.733143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.733170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.733261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.733290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 
00:36:17.837 [2024-11-18 20:37:29.733436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.733463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.733590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.733617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.733714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.733742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.733828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.733856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.734001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.734030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 
00:36:17.837 [2024-11-18 20:37:29.734114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.734142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.734250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.734278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.734430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.734457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.734575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.734603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.734709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.734737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 
00:36:17.837 [2024-11-18 20:37:29.734818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.734846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.734972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.735000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.735096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.735123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.837 [2024-11-18 20:37:29.735212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.837 [2024-11-18 20:37:29.735239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.837 qpair failed and we were unable to recover it. 00:36:17.838 [2024-11-18 20:37:29.735338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.735367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 
00:36:17.838 [2024-11-18 20:37:29.735461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.735489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 00:36:17.838 [2024-11-18 20:37:29.735571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.735600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 00:36:17.838 [2024-11-18 20:37:29.735699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.735727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 00:36:17.838 [2024-11-18 20:37:29.735828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.735858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 00:36:17.838 [2024-11-18 20:37:29.735976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.736004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 
00:36:17.838 [2024-11-18 20:37:29.736118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.736146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 00:36:17.838 [2024-11-18 20:37:29.736246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.736273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 00:36:17.838 [2024-11-18 20:37:29.736370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.736398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 00:36:17.838 [2024-11-18 20:37:29.736497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.736525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 00:36:17.838 [2024-11-18 20:37:29.736652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.736682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 
00:36:17.838 [2024-11-18 20:37:29.736765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.736792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 00:36:17.838 [2024-11-18 20:37:29.736916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.736944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 00:36:17.838 [2024-11-18 20:37:29.737063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.737090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 00:36:17.838 [2024-11-18 20:37:29.737237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.737265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 00:36:17.838 [2024-11-18 20:37:29.737387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.737415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 
00:36:17.838 [2024-11-18 20:37:29.737524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.737552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 00:36:17.838 [2024-11-18 20:37:29.737683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.737732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 00:36:17.838 [2024-11-18 20:37:29.737843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.737873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 00:36:17.838 [2024-11-18 20:37:29.738018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.738047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 00:36:17.838 [2024-11-18 20:37:29.738143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.738172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 
00:36:17.838 [2024-11-18 20:37:29.738259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.738289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 00:36:17.838 [2024-11-18 20:37:29.738403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.738431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 00:36:17.838 [2024-11-18 20:37:29.738546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.738574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 00:36:17.838 [2024-11-18 20:37:29.738664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.738693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 00:36:17.838 [2024-11-18 20:37:29.738794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.738822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 
00:36:17.838 [2024-11-18 20:37:29.738936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.738970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 00:36:17.838 [2024-11-18 20:37:29.739142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.739176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 00:36:17.838 [2024-11-18 20:37:29.739372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.739423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 00:36:17.838 [2024-11-18 20:37:29.739622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.739662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 00:36:17.838 [2024-11-18 20:37:29.739761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.739789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 
00:36:17.838 [2024-11-18 20:37:29.739944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.739993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 00:36:17.838 [2024-11-18 20:37:29.740121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.740169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 00:36:17.838 [2024-11-18 20:37:29.740339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.740387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 00:36:17.838 [2024-11-18 20:37:29.740477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.740505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 00:36:17.838 [2024-11-18 20:37:29.740604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.838 [2024-11-18 20:37:29.740633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.838 qpair failed and we were unable to recover it. 
[... 110 further identical connect() failed / qpair failed record triples omitted; tqpair alternates between 0x1671b40 and 0x7fe6a0000b90, timestamps [2024-11-18 20:37:29.740748] through [2024-11-18 20:37:29.757871] ...]
00:36:17.840 [2024-11-18 20:37:29.758012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.840 [2024-11-18 20:37:29.758045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.840 qpair failed and we were unable to recover it. 00:36:17.840 [2024-11-18 20:37:29.758186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.840 [2024-11-18 20:37:29.758220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.840 qpair failed and we were unable to recover it. 00:36:17.840 [2024-11-18 20:37:29.758356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.840 [2024-11-18 20:37:29.758388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.840 qpair failed and we were unable to recover it. 00:36:17.840 [2024-11-18 20:37:29.758527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.840 [2024-11-18 20:37:29.758556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.758701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.758730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 
00:36:17.841 [2024-11-18 20:37:29.758850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.758879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.759015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.759046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.759211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.759242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.759367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.759398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.759547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.759590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 
00:36:17.841 [2024-11-18 20:37:29.759720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.759751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.759877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.759906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.760000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.760029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.760149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.760195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.760309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.760338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 
00:36:17.841 [2024-11-18 20:37:29.760487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.760516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.760607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.760656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.760755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.760783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.760876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.760904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.760994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.761026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 
00:36:17.841 [2024-11-18 20:37:29.761142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.761188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.761288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.761319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.761527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.761558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.761729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.761757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.761955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.761985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 
00:36:17.841 [2024-11-18 20:37:29.762111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.762141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.762258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.762288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.762497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.762527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.762651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.762697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.762796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.762824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 
00:36:17.841 [2024-11-18 20:37:29.762955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.762985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.763079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.763109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.763209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.763239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.763373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.763406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.763571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.763600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 
00:36:17.841 [2024-11-18 20:37:29.763723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.763752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.763893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.763942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.764086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.764131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.764270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.764316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.764461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.764490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 
00:36:17.841 [2024-11-18 20:37:29.764577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.764605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.764752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.841 [2024-11-18 20:37:29.764799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.841 qpair failed and we were unable to recover it. 00:36:17.841 [2024-11-18 20:37:29.764889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.764918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.765084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.765129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.765280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.765308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 
00:36:17.842 [2024-11-18 20:37:29.765393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.765422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.765544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.765577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.765695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.765726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.765848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.765876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.766075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.766104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 
00:36:17.842 [2024-11-18 20:37:29.766192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.766222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.766311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.766340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.766485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.766514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.766593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.766622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.766778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.766808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 
00:36:17.842 [2024-11-18 20:37:29.766942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.766988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.767126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.767172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.767303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.767347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.767441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.767470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.767563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.767591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 
00:36:17.842 [2024-11-18 20:37:29.767692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.767722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.767871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.767899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.768020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.768049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.768161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.768189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.768312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.768340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 
00:36:17.842 [2024-11-18 20:37:29.768463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.768492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.768620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.768656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.768773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.768801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.768896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.768924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.769043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.769072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 
00:36:17.842 [2024-11-18 20:37:29.769158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.769186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.769300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.769328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.769421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.769450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.769598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.769627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.769757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.769785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 
00:36:17.842 [2024-11-18 20:37:29.769894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.769923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.770004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.770032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.770148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.770176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.770261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.770290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.770382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.770410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 
00:36:17.842 [2024-11-18 20:37:29.770528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.770556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.770659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.770689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.770808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.770836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.770957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.770985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.771075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.771103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 
00:36:17.842 [2024-11-18 20:37:29.771229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.771257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.771378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.771411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.771527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.771555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.771647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.842 [2024-11-18 20:37:29.771692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.842 qpair failed and we were unable to recover it. 00:36:17.842 [2024-11-18 20:37:29.771779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.771808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 
00:36:17.843 [2024-11-18 20:37:29.771921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.771948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.772077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.772104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.772216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.772243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.772342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.772368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.772449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.772475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 
00:36:17.843 [2024-11-18 20:37:29.772589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.772617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.772705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.772733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.772835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.772863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.772954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.772981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.773091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.773119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 
00:36:17.843 [2024-11-18 20:37:29.773199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.773225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.773332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.773360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.773485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.773527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.773616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.773655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.773805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.773833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 
00:36:17.843 [2024-11-18 20:37:29.773956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.773983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.774098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.774124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.774242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.774269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.774386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.774414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.774559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.774586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 
00:36:17.843 [2024-11-18 20:37:29.774699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.774728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.774821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.774848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.774999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.775025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.775110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.775140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.775289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.775317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 
00:36:17.843 [2024-11-18 20:37:29.775412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.775439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.775533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.775560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.775751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.775779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.775897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.775923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.776042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.776069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 
00:36:17.843 [2024-11-18 20:37:29.776215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.776243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.776358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.776383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.776501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.776526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.776657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.776684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.776763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.776789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 
00:36:17.843 [2024-11-18 20:37:29.776878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.776905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.776991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.777017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.777141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.777167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.777252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.777279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.777420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.777447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 
00:36:17.843 [2024-11-18 20:37:29.777563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.777590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.777706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.777733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.777823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.777849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.777964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.777990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.843 [2024-11-18 20:37:29.778102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.778128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 
00:36:17.843 [2024-11-18 20:37:29.778216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.843 [2024-11-18 20:37:29.778243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.843 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.778330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.778356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.778494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.778520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.778649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.778676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.778795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.778822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 
00:36:17.844 [2024-11-18 20:37:29.778937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.778964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.779044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.779069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.779188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.779213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.779308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.779334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.779423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.779451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 
00:36:17.844 [2024-11-18 20:37:29.779546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.779572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.779665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.779692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.779887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.779913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.780052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.780078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.780219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.780245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 
00:36:17.844 [2024-11-18 20:37:29.780328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.780356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.780441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.780466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.780559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.780585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.780663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.780689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.780782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.780809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 
00:36:17.844 [2024-11-18 20:37:29.780897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.780923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.781038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.781064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.781176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.781202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.781315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.781341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.781460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.781485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 
00:36:17.844 [2024-11-18 20:37:29.781590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.781616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.781713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.781739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.781822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.781849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.781961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.781987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.782104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.782131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 
00:36:17.844 [2024-11-18 20:37:29.782222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.782247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.782361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.782388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.782482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.782508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.782612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.782660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.782806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.782834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 
00:36:17.844 [2024-11-18 20:37:29.782953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.782979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.783086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.783113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.783194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.783220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.783327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.783354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 00:36:17.844 [2024-11-18 20:37:29.783454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.844 [2024-11-18 20:37:29.783482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:17.844 qpair failed and we were unable to recover it. 
[... log truncated: the preceding pair of messages — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 / 0x7fe6a0000b90 / 0x7fe694000b90 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it." — repeats continuously from 20:37:29.783593 through 20:37:29.801804 ...]
00:36:18.132 [2024-11-18 20:37:29.801912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.132 [2024-11-18 20:37:29.801940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.132 qpair failed and we were unable to recover it. 00:36:18.132 [2024-11-18 20:37:29.802033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.132 [2024-11-18 20:37:29.802062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.132 qpair failed and we were unable to recover it. 00:36:18.132 [2024-11-18 20:37:29.802173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.132 [2024-11-18 20:37:29.802201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.132 qpair failed and we were unable to recover it. 00:36:18.132 [2024-11-18 20:37:29.802329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.132 [2024-11-18 20:37:29.802361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.132 qpair failed and we were unable to recover it. 00:36:18.132 [2024-11-18 20:37:29.802460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.132 [2024-11-18 20:37:29.802494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.132 qpair failed and we were unable to recover it. 
00:36:18.132 [2024-11-18 20:37:29.802634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.132 [2024-11-18 20:37:29.802692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.132 qpair failed and we were unable to recover it. 00:36:18.132 [2024-11-18 20:37:29.802786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.132 [2024-11-18 20:37:29.802812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.132 qpair failed and we were unable to recover it. 00:36:18.132 [2024-11-18 20:37:29.802936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.132 [2024-11-18 20:37:29.802964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.132 qpair failed and we were unable to recover it. 00:36:18.132 [2024-11-18 20:37:29.803076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.132 [2024-11-18 20:37:29.803102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.132 qpair failed and we were unable to recover it. 00:36:18.132 [2024-11-18 20:37:29.803219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.132 [2024-11-18 20:37:29.803247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.132 qpair failed and we were unable to recover it. 
00:36:18.132 [2024-11-18 20:37:29.803423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.132 [2024-11-18 20:37:29.803483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.132 qpair failed and we were unable to recover it. 00:36:18.132 [2024-11-18 20:37:29.803703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.132 [2024-11-18 20:37:29.803732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.132 qpair failed and we were unable to recover it. 00:36:18.132 [2024-11-18 20:37:29.803863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.132 [2024-11-18 20:37:29.803905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.132 qpair failed and we were unable to recover it. 00:36:18.132 [2024-11-18 20:37:29.804025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.132 [2024-11-18 20:37:29.804088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.132 qpair failed and we were unable to recover it. 00:36:18.132 [2024-11-18 20:37:29.804229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.132 [2024-11-18 20:37:29.804300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.132 qpair failed and we were unable to recover it. 
00:36:18.132 [2024-11-18 20:37:29.804419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.132 [2024-11-18 20:37:29.804449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.132 qpair failed and we were unable to recover it. 00:36:18.132 [2024-11-18 20:37:29.804584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.132 [2024-11-18 20:37:29.804609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.132 qpair failed and we were unable to recover it. 00:36:18.132 [2024-11-18 20:37:29.804726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.132 [2024-11-18 20:37:29.804756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.132 qpair failed and we were unable to recover it. 00:36:18.132 [2024-11-18 20:37:29.804885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.132 [2024-11-18 20:37:29.804915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.132 qpair failed and we were unable to recover it. 00:36:18.132 [2024-11-18 20:37:29.805011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.132 [2024-11-18 20:37:29.805040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.132 qpair failed and we were unable to recover it. 
00:36:18.132 [2024-11-18 20:37:29.805159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.132 [2024-11-18 20:37:29.805188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.132 qpair failed and we were unable to recover it. 00:36:18.132 [2024-11-18 20:37:29.805305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.132 [2024-11-18 20:37:29.805334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.132 qpair failed and we were unable to recover it. 00:36:18.132 [2024-11-18 20:37:29.805482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.132 [2024-11-18 20:37:29.805524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.132 qpair failed and we were unable to recover it. 00:36:18.132 [2024-11-18 20:37:29.805648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.132 [2024-11-18 20:37:29.805675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.132 qpair failed and we were unable to recover it. 00:36:18.132 [2024-11-18 20:37:29.805833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.132 [2024-11-18 20:37:29.805862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.132 qpair failed and we were unable to recover it. 
00:36:18.132 [2024-11-18 20:37:29.805983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.132 [2024-11-18 20:37:29.806011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.132 qpair failed and we were unable to recover it. 00:36:18.132 [2024-11-18 20:37:29.806128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.132 [2024-11-18 20:37:29.806158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.132 qpair failed and we were unable to recover it. 00:36:18.132 [2024-11-18 20:37:29.806351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.132 [2024-11-18 20:37:29.806406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.132 qpair failed and we were unable to recover it. 00:36:18.132 [2024-11-18 20:37:29.806698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.132 [2024-11-18 20:37:29.806728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.132 qpair failed and we were unable to recover it. 00:36:18.132 [2024-11-18 20:37:29.806825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.806858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 
00:36:18.133 [2024-11-18 20:37:29.806981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.807010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 00:36:18.133 [2024-11-18 20:37:29.807128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.807158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 00:36:18.133 [2024-11-18 20:37:29.807283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.807312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 00:36:18.133 [2024-11-18 20:37:29.807496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.807573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 00:36:18.133 [2024-11-18 20:37:29.807781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.807811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 
00:36:18.133 [2024-11-18 20:37:29.807980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.808008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 00:36:18.133 [2024-11-18 20:37:29.808151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.808179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 00:36:18.133 [2024-11-18 20:37:29.808340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.808401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 00:36:18.133 [2024-11-18 20:37:29.808671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.808700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 00:36:18.133 [2024-11-18 20:37:29.808839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.808866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 
00:36:18.133 [2024-11-18 20:37:29.808992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.809020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 00:36:18.133 [2024-11-18 20:37:29.809138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.809167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 00:36:18.133 [2024-11-18 20:37:29.809476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.809503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 00:36:18.133 [2024-11-18 20:37:29.809622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.809657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 00:36:18.133 [2024-11-18 20:37:29.809750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.809777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 
00:36:18.133 [2024-11-18 20:37:29.809872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.809899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 00:36:18.133 [2024-11-18 20:37:29.810061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.810088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 00:36:18.133 [2024-11-18 20:37:29.810255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.810316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 00:36:18.133 [2024-11-18 20:37:29.810592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.810692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 00:36:18.133 [2024-11-18 20:37:29.810787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.810817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 
00:36:18.133 [2024-11-18 20:37:29.810963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.811006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 00:36:18.133 [2024-11-18 20:37:29.811085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.811112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 00:36:18.133 [2024-11-18 20:37:29.811228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.811256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 00:36:18.133 [2024-11-18 20:37:29.811364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.811394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 00:36:18.133 [2024-11-18 20:37:29.811684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.811714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 
00:36:18.133 [2024-11-18 20:37:29.811839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.811868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 00:36:18.133 [2024-11-18 20:37:29.811987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.812015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 00:36:18.133 [2024-11-18 20:37:29.812108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.812151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 00:36:18.133 [2024-11-18 20:37:29.812268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.812295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 00:36:18.133 [2024-11-18 20:37:29.812471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.812529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 
00:36:18.133 [2024-11-18 20:37:29.812710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.812739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 00:36:18.133 [2024-11-18 20:37:29.812861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.812890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 00:36:18.133 [2024-11-18 20:37:29.813037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.813065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 00:36:18.133 [2024-11-18 20:37:29.813174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.813202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 00:36:18.133 [2024-11-18 20:37:29.813421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.813482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 
00:36:18.133 [2024-11-18 20:37:29.813731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.813760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.133 qpair failed and we were unable to recover it. 00:36:18.133 [2024-11-18 20:37:29.813878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.133 [2024-11-18 20:37:29.813907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.134 qpair failed and we were unable to recover it. 00:36:18.134 [2024-11-18 20:37:29.813995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.134 [2024-11-18 20:37:29.814024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.134 qpair failed and we were unable to recover it. 00:36:18.134 [2024-11-18 20:37:29.814116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.134 [2024-11-18 20:37:29.814191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.134 qpair failed and we were unable to recover it. 00:36:18.134 [2024-11-18 20:37:29.814439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.134 [2024-11-18 20:37:29.814516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.134 qpair failed and we were unable to recover it. 
00:36:18.134 [2024-11-18 20:37:29.814768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.134 [2024-11-18 20:37:29.814797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.134 qpair failed and we were unable to recover it. 00:36:18.134 [2024-11-18 20:37:29.814892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.134 [2024-11-18 20:37:29.814921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.134 qpair failed and we were unable to recover it. 00:36:18.134 [2024-11-18 20:37:29.815087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.134 [2024-11-18 20:37:29.815117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.134 qpair failed and we were unable to recover it. 00:36:18.134 [2024-11-18 20:37:29.815238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.134 [2024-11-18 20:37:29.815268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.134 qpair failed and we were unable to recover it. 00:36:18.134 [2024-11-18 20:37:29.815371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.134 [2024-11-18 20:37:29.815400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.134 qpair failed and we were unable to recover it. 
00:36:18.134 [2024-11-18 20:37:29.815567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.134 [2024-11-18 20:37:29.815595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.134 qpair failed and we were unable to recover it.
00:36:18.134 [2024-11-18 20:37:29.815713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.134 [2024-11-18 20:37:29.815742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.134 qpair failed and we were unable to recover it.
00:36:18.134 [2024-11-18 20:37:29.815909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.134 [2024-11-18 20:37:29.815938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.134 qpair failed and we were unable to recover it.
00:36:18.134 [2024-11-18 20:37:29.816080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.134 [2024-11-18 20:37:29.816107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.134 qpair failed and we were unable to recover it.
00:36:18.134 [2024-11-18 20:37:29.816222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.134 [2024-11-18 20:37:29.816250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.134 qpair failed and we were unable to recover it.
00:36:18.134 [2024-11-18 20:37:29.816386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.134 [2024-11-18 20:37:29.816415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.134 qpair failed and we were unable to recover it.
00:36:18.134 [2024-11-18 20:37:29.816537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.134 [2024-11-18 20:37:29.816601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.134 qpair failed and we were unable to recover it.
00:36:18.134 [2024-11-18 20:37:29.816760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.134 [2024-11-18 20:37:29.816790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.134 qpair failed and we were unable to recover it.
00:36:18.134 [2024-11-18 20:37:29.816924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.134 [2024-11-18 20:37:29.816953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.134 qpair failed and we were unable to recover it.
00:36:18.134 [2024-11-18 20:37:29.817082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.134 [2024-11-18 20:37:29.817128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.134 qpair failed and we were unable to recover it.
00:36:18.134 [2024-11-18 20:37:29.817268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.134 [2024-11-18 20:37:29.817302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.134 qpair failed and we were unable to recover it.
00:36:18.134 [2024-11-18 20:37:29.817409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.134 [2024-11-18 20:37:29.817442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.134 qpair failed and we were unable to recover it.
00:36:18.134 [2024-11-18 20:37:29.817594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.134 [2024-11-18 20:37:29.817628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.134 qpair failed and we were unable to recover it.
00:36:18.134 [2024-11-18 20:37:29.817755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.134 [2024-11-18 20:37:29.817786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.134 qpair failed and we were unable to recover it.
00:36:18.134 [2024-11-18 20:37:29.817909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.134 [2024-11-18 20:37:29.817953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.134 qpair failed and we were unable to recover it.
00:36:18.134 [2024-11-18 20:37:29.818066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.134 [2024-11-18 20:37:29.818093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.134 qpair failed and we were unable to recover it.
00:36:18.134 [2024-11-18 20:37:29.818204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.134 [2024-11-18 20:37:29.818268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.134 qpair failed and we were unable to recover it.
00:36:18.134 [2024-11-18 20:37:29.818551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.134 [2024-11-18 20:37:29.818611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.134 qpair failed and we were unable to recover it.
00:36:18.134 [2024-11-18 20:37:29.818822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.134 [2024-11-18 20:37:29.818852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.134 qpair failed and we were unable to recover it.
00:36:18.134 [2024-11-18 20:37:29.819014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.134 [2024-11-18 20:37:29.819075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.134 qpair failed and we were unable to recover it.
00:36:18.134 [2024-11-18 20:37:29.819250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.134 [2024-11-18 20:37:29.819314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.134 qpair failed and we were unable to recover it.
00:36:18.134 [2024-11-18 20:37:29.819443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.134 [2024-11-18 20:37:29.819471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.134 qpair failed and we were unable to recover it.
00:36:18.134 [2024-11-18 20:37:29.819679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.134 [2024-11-18 20:37:29.819708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.134 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.819802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.819829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.819947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.819974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.820106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.820141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.820309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.820345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.820452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.820486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.820630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.820688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.820785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.820815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.820923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.820952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.821090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.821125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.821292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.821326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.821467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.821501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.821646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.821698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.821806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.821835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.821957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.821987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.822136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.822184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.822333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.822365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.822501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.822534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.822666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.822714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.822822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.822850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.822947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.822977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.823133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.823166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.823322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.823351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.823506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.823538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.823684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.823715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.823814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.823844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.823972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.824002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.824150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.824184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.824318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.824350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.824478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.824511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.824618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.824663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.824806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.824835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.824947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.824977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.825128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.825160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.825275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.825320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.825493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.825526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.825705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.825735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.825836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.825865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.135 [2024-11-18 20:37:29.826018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.135 [2024-11-18 20:37:29.826065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.135 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.826172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.826204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.826337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.826370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.826502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.826534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.826707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.826737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.826858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.826888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.827009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.827038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.827165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.827198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.827324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.827356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.827478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.827509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.827643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.827675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.827817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.827846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.828010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.828041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.828141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.828170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.828262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.828298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.828434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.828480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.828664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.828694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.828791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.828821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.828943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.828973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.829099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.829129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.829256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.829287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.829385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.829415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.829507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.829537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.829644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.829691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.829781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.829811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.829928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.829960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.830060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.830090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.830215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.830247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.830382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.830413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.830516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.830546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.830633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.830686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.830779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.830807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.830899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.830929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.831053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.831083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.831194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.831226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.831324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.831355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.831479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.831509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.136 qpair failed and we were unable to recover it.
00:36:18.136 [2024-11-18 20:37:29.831608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.136 [2024-11-18 20:37:29.831669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.137 qpair failed and we were unable to recover it.
00:36:18.137 [2024-11-18 20:37:29.831766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.137 [2024-11-18 20:37:29.831796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.137 qpair failed and we were unable to recover it.
00:36:18.137 [2024-11-18 20:37:29.831919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.137 [2024-11-18 20:37:29.831948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.137 qpair failed and we were unable to recover it.
00:36:18.137 [2024-11-18 20:37:29.832079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.137 [2024-11-18 20:37:29.832110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.137 qpair failed and we were unable to recover it.
00:36:18.137 [2024-11-18 20:37:29.832269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.137 [2024-11-18 20:37:29.832298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.137 qpair failed and we were unable to recover it.
00:36:18.137 [2024-11-18 20:37:29.832424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.137 [2024-11-18 20:37:29.832453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.137 qpair failed and we were unable to recover it.
00:36:18.137 [2024-11-18 20:37:29.832582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.137 [2024-11-18 20:37:29.832611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.137 qpair failed and we were unable to recover it.
00:36:18.137 [2024-11-18 20:37:29.832741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.137 [2024-11-18 20:37:29.832770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.137 qpair failed and we were unable to recover it.
00:36:18.137 [2024-11-18 20:37:29.832863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.137 [2024-11-18 20:37:29.832894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.137 qpair failed and we were unable to recover it.
00:36:18.137 [2024-11-18 20:37:29.833046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.137 [2024-11-18 20:37:29.833074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.137 qpair failed and we were unable to recover it.
00:36:18.137 [2024-11-18 20:37:29.833182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.137 [2024-11-18 20:37:29.833212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.137 qpair failed and we were unable to recover it.
00:36:18.137 [2024-11-18 20:37:29.833332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.137 [2024-11-18 20:37:29.833361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.137 qpair failed and we were unable to recover it.
00:36:18.137 [2024-11-18 20:37:29.833481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.137 [2024-11-18 20:37:29.833510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.137 qpair failed and we were unable to recover it.
00:36:18.137 [2024-11-18 20:37:29.833645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.137 [2024-11-18 20:37:29.833676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.137 qpair failed and we were unable to recover it.
00:36:18.137 [2024-11-18 20:37:29.833793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.137 [2024-11-18 20:37:29.833822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.137 qpair failed and we were unable to recover it.
00:36:18.137 [2024-11-18 20:37:29.833949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.137 [2024-11-18 20:37:29.833979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.137 qpair failed and we were unable to recover it.
00:36:18.137 [2024-11-18 20:37:29.834106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.137 [2024-11-18 20:37:29.834135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.137 qpair failed and we were unable to recover it.
00:36:18.137 [2024-11-18 20:37:29.834286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.137 [2024-11-18 20:37:29.834321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.137 qpair failed and we were unable to recover it.
00:36:18.137 [2024-11-18 20:37:29.834427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.137 [2024-11-18 20:37:29.834456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.137 qpair failed and we were unable to recover it. 00:36:18.137 [2024-11-18 20:37:29.834551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.137 [2024-11-18 20:37:29.834580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.137 qpair failed and we were unable to recover it. 00:36:18.137 [2024-11-18 20:37:29.834716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.137 [2024-11-18 20:37:29.834746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.137 qpair failed and we were unable to recover it. 00:36:18.137 [2024-11-18 20:37:29.834863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.137 [2024-11-18 20:37:29.834892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.137 qpair failed and we were unable to recover it. 00:36:18.137 [2024-11-18 20:37:29.835010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.137 [2024-11-18 20:37:29.835039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.137 qpair failed and we were unable to recover it. 
00:36:18.137 [2024-11-18 20:37:29.835134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.137 [2024-11-18 20:37:29.835163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.137 qpair failed and we were unable to recover it. 00:36:18.137 [2024-11-18 20:37:29.835312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.137 [2024-11-18 20:37:29.835341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.137 qpair failed and we were unable to recover it. 00:36:18.137 [2024-11-18 20:37:29.835465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.137 [2024-11-18 20:37:29.835492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.137 qpair failed and we were unable to recover it. 00:36:18.137 [2024-11-18 20:37:29.835614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.137 [2024-11-18 20:37:29.835647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.137 qpair failed and we were unable to recover it. 00:36:18.137 [2024-11-18 20:37:29.835766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.137 [2024-11-18 20:37:29.835795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.137 qpair failed and we were unable to recover it. 
00:36:18.137 [2024-11-18 20:37:29.835888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.137 [2024-11-18 20:37:29.835916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.137 qpair failed and we were unable to recover it. 00:36:18.137 [2024-11-18 20:37:29.836005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.137 [2024-11-18 20:37:29.836034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.137 qpair failed and we were unable to recover it. 00:36:18.137 [2024-11-18 20:37:29.836157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.137 [2024-11-18 20:37:29.836186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.137 qpair failed and we were unable to recover it. 00:36:18.137 [2024-11-18 20:37:29.836306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.137 [2024-11-18 20:37:29.836334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.137 qpair failed and we were unable to recover it. 00:36:18.137 [2024-11-18 20:37:29.836454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.137 [2024-11-18 20:37:29.836483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.137 qpair failed and we were unable to recover it. 
00:36:18.137 [2024-11-18 20:37:29.836629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.137 [2024-11-18 20:37:29.836663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.137 qpair failed and we were unable to recover it. 00:36:18.137 [2024-11-18 20:37:29.836757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.137 [2024-11-18 20:37:29.836785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.137 qpair failed and we were unable to recover it. 00:36:18.137 [2024-11-18 20:37:29.836882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.137 [2024-11-18 20:37:29.836910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.137 qpair failed and we were unable to recover it. 00:36:18.137 [2024-11-18 20:37:29.837057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.137 [2024-11-18 20:37:29.837085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.137 qpair failed and we were unable to recover it. 00:36:18.137 [2024-11-18 20:37:29.837202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.137 [2024-11-18 20:37:29.837231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.137 qpair failed and we were unable to recover it. 
00:36:18.137 [2024-11-18 20:37:29.837328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.837357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.138 [2024-11-18 20:37:29.837447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.837477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.138 [2024-11-18 20:37:29.837592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.837621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.138 [2024-11-18 20:37:29.837744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.837773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.138 [2024-11-18 20:37:29.837867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.837895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 
00:36:18.138 [2024-11-18 20:37:29.838019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.838048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.138 [2024-11-18 20:37:29.838201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.838229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.138 [2024-11-18 20:37:29.838346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.838374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.138 [2024-11-18 20:37:29.838496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.838526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.138 [2024-11-18 20:37:29.838649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.838693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 
00:36:18.138 [2024-11-18 20:37:29.838815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.838841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.138 [2024-11-18 20:37:29.838956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.838983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.138 [2024-11-18 20:37:29.839075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.839102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.138 [2024-11-18 20:37:29.839222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.839250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.138 [2024-11-18 20:37:29.839333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.839360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 
00:36:18.138 [2024-11-18 20:37:29.839474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.839502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.138 [2024-11-18 20:37:29.839613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.839647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.138 [2024-11-18 20:37:29.839741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.839771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.138 [2024-11-18 20:37:29.839868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.839895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.138 [2024-11-18 20:37:29.840015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.840047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 
00:36:18.138 [2024-11-18 20:37:29.840170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.840197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.138 [2024-11-18 20:37:29.840273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.840300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.138 [2024-11-18 20:37:29.840441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.840468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.138 [2024-11-18 20:37:29.840585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.840612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.138 [2024-11-18 20:37:29.840714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.840742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 
00:36:18.138 [2024-11-18 20:37:29.840835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.840863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.138 [2024-11-18 20:37:29.841004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.841031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.138 [2024-11-18 20:37:29.841157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.841184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.138 [2024-11-18 20:37:29.841298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.841327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.138 [2024-11-18 20:37:29.841416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.841443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 
00:36:18.138 [2024-11-18 20:37:29.841582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.841609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.138 [2024-11-18 20:37:29.841711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.841738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.138 [2024-11-18 20:37:29.841857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.841884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.138 [2024-11-18 20:37:29.842026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.842053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.138 [2024-11-18 20:37:29.842196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.842222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 
00:36:18.138 [2024-11-18 20:37:29.842335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.842362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.138 [2024-11-18 20:37:29.842448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.842474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.138 [2024-11-18 20:37:29.842610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.138 [2024-11-18 20:37:29.842658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.138 qpair failed and we were unable to recover it. 00:36:18.139 [2024-11-18 20:37:29.842771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.139 [2024-11-18 20:37:29.842797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.139 qpair failed and we were unable to recover it. 00:36:18.139 [2024-11-18 20:37:29.842921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.139 [2024-11-18 20:37:29.842948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.139 qpair failed and we were unable to recover it. 
00:36:18.139 [2024-11-18 20:37:29.843034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.139 [2024-11-18 20:37:29.843061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.139 qpair failed and we were unable to recover it. 00:36:18.139 [2024-11-18 20:37:29.843204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.139 [2024-11-18 20:37:29.843230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.139 qpair failed and we were unable to recover it. 00:36:18.139 [2024-11-18 20:37:29.843338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.139 [2024-11-18 20:37:29.843365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.139 qpair failed and we were unable to recover it. 00:36:18.139 [2024-11-18 20:37:29.843474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.139 [2024-11-18 20:37:29.843500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.139 qpair failed and we were unable to recover it. 00:36:18.139 [2024-11-18 20:37:29.843606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.139 [2024-11-18 20:37:29.843632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.139 qpair failed and we were unable to recover it. 
00:36:18.139 [2024-11-18 20:37:29.843758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.139 [2024-11-18 20:37:29.843784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.139 qpair failed and we were unable to recover it. 00:36:18.139 [2024-11-18 20:37:29.843876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.139 [2024-11-18 20:37:29.843903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.139 qpair failed and we were unable to recover it. 00:36:18.139 [2024-11-18 20:37:29.843983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.139 [2024-11-18 20:37:29.844009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.139 qpair failed and we were unable to recover it. 00:36:18.139 [2024-11-18 20:37:29.844145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.139 [2024-11-18 20:37:29.844172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.139 qpair failed and we were unable to recover it. 00:36:18.139 [2024-11-18 20:37:29.844256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.139 [2024-11-18 20:37:29.844283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.139 qpair failed and we were unable to recover it. 
00:36:18.139 [2024-11-18 20:37:29.844398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.139 [2024-11-18 20:37:29.844426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.139 qpair failed and we were unable to recover it. 00:36:18.139 [2024-11-18 20:37:29.844538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.139 [2024-11-18 20:37:29.844564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.139 qpair failed and we were unable to recover it. 00:36:18.139 [2024-11-18 20:37:29.844674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.139 [2024-11-18 20:37:29.844702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.139 qpair failed and we were unable to recover it. 00:36:18.139 [2024-11-18 20:37:29.844790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.139 [2024-11-18 20:37:29.844817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.139 qpair failed and we were unable to recover it. 00:36:18.139 [2024-11-18 20:37:29.844963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.139 [2024-11-18 20:37:29.844990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.139 qpair failed and we were unable to recover it. 
00:36:18.139 [2024-11-18 20:37:29.845074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.139 [2024-11-18 20:37:29.845100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.139 qpair failed and we were unable to recover it.
00:36:18.141 [2024-11-18 20:37:29.854872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.141 [2024-11-18 20:37:29.854912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.141 qpair failed and we were unable to recover it.
00:36:18.142 [2024-11-18 20:37:29.861980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.142 [2024-11-18 20:37:29.862040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.142 qpair failed and we were unable to recover it. 00:36:18.142 [2024-11-18 20:37:29.862218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.142 [2024-11-18 20:37:29.862265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.142 qpair failed and we were unable to recover it. 00:36:18.142 [2024-11-18 20:37:29.862454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.142 [2024-11-18 20:37:29.862499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.142 qpair failed and we were unable to recover it. 00:36:18.142 [2024-11-18 20:37:29.862681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.142 [2024-11-18 20:37:29.862734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.142 qpair failed and we were unable to recover it. 00:36:18.142 [2024-11-18 20:37:29.862851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.142 [2024-11-18 20:37:29.862878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.142 qpair failed and we were unable to recover it. 
00:36:18.142 [2024-11-18 20:37:29.863015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.142 [2024-11-18 20:37:29.863041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.142 qpair failed and we were unable to recover it. 00:36:18.142 [2024-11-18 20:37:29.863204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.142 [2024-11-18 20:37:29.863253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.142 qpair failed and we were unable to recover it. 00:36:18.142 [2024-11-18 20:37:29.863431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.142 [2024-11-18 20:37:29.863475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.142 qpair failed and we were unable to recover it. 00:36:18.142 [2024-11-18 20:37:29.863659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.142 [2024-11-18 20:37:29.863728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.142 qpair failed and we were unable to recover it. 00:36:18.142 [2024-11-18 20:37:29.863813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.142 [2024-11-18 20:37:29.863845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.142 qpair failed and we were unable to recover it. 
00:36:18.142 [2024-11-18 20:37:29.864074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.142 [2024-11-18 20:37:29.864141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.142 qpair failed and we were unable to recover it. 00:36:18.142 [2024-11-18 20:37:29.864292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.142 [2024-11-18 20:37:29.864318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.142 qpair failed and we were unable to recover it. 00:36:18.142 [2024-11-18 20:37:29.864498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.142 [2024-11-18 20:37:29.864543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 00:36:18.143 [2024-11-18 20:37:29.864712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.864739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 00:36:18.143 [2024-11-18 20:37:29.864880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.864907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 
00:36:18.143 [2024-11-18 20:37:29.865158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.865211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 00:36:18.143 [2024-11-18 20:37:29.865393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.865439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 00:36:18.143 [2024-11-18 20:37:29.865629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.865698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 00:36:18.143 [2024-11-18 20:37:29.865818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.865844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 00:36:18.143 [2024-11-18 20:37:29.865960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.865986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 
00:36:18.143 [2024-11-18 20:37:29.866175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.866221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 00:36:18.143 [2024-11-18 20:37:29.866357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.866404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 00:36:18.143 [2024-11-18 20:37:29.866631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.866710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 00:36:18.143 [2024-11-18 20:37:29.866826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.866854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 00:36:18.143 [2024-11-18 20:37:29.867001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.867028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 
00:36:18.143 [2024-11-18 20:37:29.867137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.867163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 00:36:18.143 [2024-11-18 20:37:29.867288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.867340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 00:36:18.143 [2024-11-18 20:37:29.867494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.867541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 00:36:18.143 [2024-11-18 20:37:29.867742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.867781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 00:36:18.143 [2024-11-18 20:37:29.867904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.867932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 
00:36:18.143 [2024-11-18 20:37:29.868019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.868045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 00:36:18.143 [2024-11-18 20:37:29.868133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.868188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 00:36:18.143 [2024-11-18 20:37:29.868363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.868388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 00:36:18.143 [2024-11-18 20:37:29.868598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.868631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 00:36:18.143 [2024-11-18 20:37:29.868774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.868800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 
00:36:18.143 [2024-11-18 20:37:29.868953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.869013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 00:36:18.143 [2024-11-18 20:37:29.869240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.869274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 00:36:18.143 [2024-11-18 20:37:29.869477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.869537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 00:36:18.143 [2024-11-18 20:37:29.869781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.869808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 00:36:18.143 [2024-11-18 20:37:29.869919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.869945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 
00:36:18.143 [2024-11-18 20:37:29.870128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.870180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 00:36:18.143 [2024-11-18 20:37:29.870378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.870456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 00:36:18.143 [2024-11-18 20:37:29.870699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.870725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 00:36:18.143 [2024-11-18 20:37:29.870836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.870862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 00:36:18.143 [2024-11-18 20:37:29.870979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.871041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 
00:36:18.143 [2024-11-18 20:37:29.871250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.871319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 00:36:18.143 [2024-11-18 20:37:29.871583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.143 [2024-11-18 20:37:29.871610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.143 qpair failed and we were unable to recover it. 00:36:18.143 [2024-11-18 20:37:29.871732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.871758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 00:36:18.144 [2024-11-18 20:37:29.871848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.871874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 00:36:18.144 [2024-11-18 20:37:29.871961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.871991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 
00:36:18.144 [2024-11-18 20:37:29.872128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.872154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 00:36:18.144 [2024-11-18 20:37:29.872328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.872375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 00:36:18.144 [2024-11-18 20:37:29.872662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.872728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 00:36:18.144 [2024-11-18 20:37:29.872828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.872856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 00:36:18.144 [2024-11-18 20:37:29.872971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.872998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 
00:36:18.144 [2024-11-18 20:37:29.873079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.873136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 00:36:18.144 [2024-11-18 20:37:29.873379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.873451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 00:36:18.144 [2024-11-18 20:37:29.873651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.873696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 00:36:18.144 [2024-11-18 20:37:29.873784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.873810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 00:36:18.144 [2024-11-18 20:37:29.873958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.874026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 
00:36:18.144 [2024-11-18 20:37:29.874241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.874267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 00:36:18.144 [2024-11-18 20:37:29.874469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.874518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 00:36:18.144 [2024-11-18 20:37:29.874714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.874742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 00:36:18.144 [2024-11-18 20:37:29.874889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.874916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 00:36:18.144 [2024-11-18 20:37:29.875120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.875179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 
00:36:18.144 [2024-11-18 20:37:29.875409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.875436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 00:36:18.144 [2024-11-18 20:37:29.875688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.875715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 00:36:18.144 [2024-11-18 20:37:29.875828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.875854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 00:36:18.144 [2024-11-18 20:37:29.875936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.875962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 00:36:18.144 [2024-11-18 20:37:29.876075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.876148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 
00:36:18.144 [2024-11-18 20:37:29.876369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.876396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 00:36:18.144 [2024-11-18 20:37:29.876525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.876552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 00:36:18.144 [2024-11-18 20:37:29.876633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.876666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 00:36:18.144 [2024-11-18 20:37:29.876773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.876800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 00:36:18.144 [2024-11-18 20:37:29.876883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.876911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 
00:36:18.144 [2024-11-18 20:37:29.877034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.877062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 00:36:18.144 [2024-11-18 20:37:29.877151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.877179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 00:36:18.144 [2024-11-18 20:37:29.877263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.877291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 00:36:18.144 [2024-11-18 20:37:29.877411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.877468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 00:36:18.144 [2024-11-18 20:37:29.877632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.877700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 
00:36:18.144 [2024-11-18 20:37:29.877819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.877847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 00:36:18.144 [2024-11-18 20:37:29.877959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.877987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 00:36:18.144 [2024-11-18 20:37:29.878117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.144 [2024-11-18 20:37:29.878194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.144 qpair failed and we were unable to recover it. 00:36:18.144 [2024-11-18 20:37:29.878473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.878533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 00:36:18.145 [2024-11-18 20:37:29.878764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.878792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 
00:36:18.145 [2024-11-18 20:37:29.878908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.878936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 00:36:18.145 [2024-11-18 20:37:29.879077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.879148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 00:36:18.145 [2024-11-18 20:37:29.879359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.879419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 00:36:18.145 [2024-11-18 20:37:29.879596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.879654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 00:36:18.145 [2024-11-18 20:37:29.879788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.879821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 
00:36:18.145 [2024-11-18 20:37:29.879931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.879958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 00:36:18.145 [2024-11-18 20:37:29.880190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.880256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 00:36:18.145 [2024-11-18 20:37:29.880525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.880553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 00:36:18.145 [2024-11-18 20:37:29.880647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.880675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 00:36:18.145 [2024-11-18 20:37:29.880763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.880821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 
00:36:18.145 [2024-11-18 20:37:29.881027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.881054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 00:36:18.145 [2024-11-18 20:37:29.881168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.881196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 00:36:18.145 [2024-11-18 20:37:29.881421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.881478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 00:36:18.145 [2024-11-18 20:37:29.881574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.881602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 00:36:18.145 [2024-11-18 20:37:29.881695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.881723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 
00:36:18.145 [2024-11-18 20:37:29.881807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.881835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 00:36:18.145 [2024-11-18 20:37:29.881928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.881956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 00:36:18.145 [2024-11-18 20:37:29.882038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.882066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 00:36:18.145 [2024-11-18 20:37:29.882223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.882283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 00:36:18.145 [2024-11-18 20:37:29.882464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.882524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 
00:36:18.145 [2024-11-18 20:37:29.882761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.882809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 00:36:18.145 [2024-11-18 20:37:29.882987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.883059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 00:36:18.145 [2024-11-18 20:37:29.883275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.883303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 00:36:18.145 [2024-11-18 20:37:29.883411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.883439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 00:36:18.145 [2024-11-18 20:37:29.883524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.883552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 
00:36:18.145 [2024-11-18 20:37:29.883629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.883661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 00:36:18.145 [2024-11-18 20:37:29.883804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.883831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 00:36:18.145 [2024-11-18 20:37:29.883917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.883945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 00:36:18.145 [2024-11-18 20:37:29.884047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.884107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 00:36:18.145 [2024-11-18 20:37:29.884223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.884251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 
00:36:18.145 [2024-11-18 20:37:29.884511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.884598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 00:36:18.145 [2024-11-18 20:37:29.884819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.884872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 00:36:18.145 [2024-11-18 20:37:29.885084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.885151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 00:36:18.145 [2024-11-18 20:37:29.885414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.885482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 00:36:18.145 [2024-11-18 20:37:29.885701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.885760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 
00:36:18.145 [2024-11-18 20:37:29.885897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.145 [2024-11-18 20:37:29.885927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.145 qpair failed and we were unable to recover it. 00:36:18.145 [2024-11-18 20:37:29.886050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.886079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 00:36:18.146 [2024-11-18 20:37:29.886203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.886234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 00:36:18.146 [2024-11-18 20:37:29.886341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.886374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 00:36:18.146 [2024-11-18 20:37:29.886534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.886568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 
00:36:18.146 [2024-11-18 20:37:29.886743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.886773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 00:36:18.146 [2024-11-18 20:37:29.886873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.886903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 00:36:18.146 [2024-11-18 20:37:29.887043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.887075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 00:36:18.146 [2024-11-18 20:37:29.887235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.887283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 00:36:18.146 [2024-11-18 20:37:29.887469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.887515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 
00:36:18.146 [2024-11-18 20:37:29.887664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.887719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 00:36:18.146 [2024-11-18 20:37:29.887851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.887884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 00:36:18.146 [2024-11-18 20:37:29.888052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.888080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 00:36:18.146 [2024-11-18 20:37:29.888211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.888241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 00:36:18.146 [2024-11-18 20:37:29.888438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.888487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 
00:36:18.146 [2024-11-18 20:37:29.888664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.888715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 00:36:18.146 [2024-11-18 20:37:29.888888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.888918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 00:36:18.146 [2024-11-18 20:37:29.889035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.889064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 00:36:18.146 [2024-11-18 20:37:29.889223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.889294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 00:36:18.146 [2024-11-18 20:37:29.889512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 404469 Killed "${NVMF_APP[@]}" "$@" 00:36:18.146 [2024-11-18 20:37:29.889560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 
00:36:18.146 [2024-11-18 20:37:29.889749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.889783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 00:36:18.146 [2024-11-18 20:37:29.889918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.889983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.146 20:37:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:36:18.146 qpair failed and we were unable to recover it. 00:36:18.146 20:37:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:18.146 [2024-11-18 20:37:29.890211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.890259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 00:36:18.146 20:37:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:18.146 [2024-11-18 20:37:29.890408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.890455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.146 20:37:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:18.146 qpair failed and we were unable to recover it. 
00:36:18.146 20:37:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:18.146 [2024-11-18 20:37:29.890653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.890705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 00:36:18.146 [2024-11-18 20:37:29.890865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.890896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 00:36:18.146 [2024-11-18 20:37:29.891031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.891078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 00:36:18.146 [2024-11-18 20:37:29.891267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.891316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 00:36:18.146 [2024-11-18 20:37:29.891505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.891553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 
00:36:18.146 [2024-11-18 20:37:29.891738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.891772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 00:36:18.146 [2024-11-18 20:37:29.891882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.891913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 00:36:18.146 [2024-11-18 20:37:29.892136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.892184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 00:36:18.146 [2024-11-18 20:37:29.892395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.892443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 00:36:18.146 [2024-11-18 20:37:29.892630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.892687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 
00:36:18.146 [2024-11-18 20:37:29.892862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.892893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 00:36:18.146 [2024-11-18 20:37:29.893013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.146 [2024-11-18 20:37:29.893043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.146 qpair failed and we were unable to recover it. 00:36:18.146 [2024-11-18 20:37:29.893182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.893211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 00:36:18.147 [2024-11-18 20:37:29.893330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.893360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 00:36:18.147 [2024-11-18 20:37:29.893490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.893520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 
00:36:18.147 [2024-11-18 20:37:29.893650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.893680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 00:36:18.147 [2024-11-18 20:37:29.893802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.893832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 00:36:18.147 [2024-11-18 20:37:29.893956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.893986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 00:36:18.147 [2024-11-18 20:37:29.894121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.894151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 00:36:18.147 [2024-11-18 20:37:29.894280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.894310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 
00:36:18.147 [2024-11-18 20:37:29.894498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.894527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 00:36:18.147 [2024-11-18 20:37:29.894722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.894755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 00:36:18.147 [2024-11-18 20:37:29.894847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.894876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 00:36:18.147 [2024-11-18 20:37:29.895032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.895062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 00:36:18.147 [2024-11-18 20:37:29.895203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.895233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 
00:36:18.147 20:37:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=405018 00:36:18.147 20:37:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:18.147 [2024-11-18 20:37:29.895451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.895500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 00:36:18.147 20:37:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 405018 00:36:18.147 [2024-11-18 20:37:29.895667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.895723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 00:36:18.147 20:37:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 405018 ']' 00:36:18.147 [2024-11-18 20:37:29.895845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.895874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 
00:36:18.147 20:37:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:18.147 [2024-11-18 20:37:29.895996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.896026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 00:36:18.147 20:37:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:18.147 [2024-11-18 20:37:29.896159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.896222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 00:36:18.147 20:37:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:18.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:18.147 [2024-11-18 20:37:29.896417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.896465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 
00:36:18.147 20:37:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:18.147 [2024-11-18 20:37:29.896654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.896709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 00:36:18.147 20:37:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:18.147 [2024-11-18 20:37:29.896834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.896862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 00:36:18.147 [2024-11-18 20:37:29.896975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.897008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 00:36:18.147 [2024-11-18 20:37:29.897206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.897252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 
00:36:18.147 [2024-11-18 20:37:29.897432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.897483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 00:36:18.147 [2024-11-18 20:37:29.897687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.897715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 00:36:18.147 [2024-11-18 20:37:29.897805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.897831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 00:36:18.147 [2024-11-18 20:37:29.901653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.901688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 00:36:18.147 [2024-11-18 20:37:29.901845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.901876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 
00:36:18.147 [2024-11-18 20:37:29.902013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.902042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 00:36:18.147 [2024-11-18 20:37:29.902160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.902188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 00:36:18.147 [2024-11-18 20:37:29.902311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.902341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 00:36:18.147 [2024-11-18 20:37:29.902459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.902492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 00:36:18.147 [2024-11-18 20:37:29.902608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.147 [2024-11-18 20:37:29.902650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.147 qpair failed and we were unable to recover it. 
00:36:18.147 [2024-11-18 20:37:29.902809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.902839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 00:36:18.148 [2024-11-18 20:37:29.902940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.902967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 00:36:18.148 [2024-11-18 20:37:29.903099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.903128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 00:36:18.148 [2024-11-18 20:37:29.903247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.903277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 00:36:18.148 [2024-11-18 20:37:29.903370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.903395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 
00:36:18.148 [2024-11-18 20:37:29.903513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.903541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 00:36:18.148 [2024-11-18 20:37:29.903687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.903716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 00:36:18.148 [2024-11-18 20:37:29.903839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.903867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 00:36:18.148 [2024-11-18 20:37:29.903989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.904017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 00:36:18.148 [2024-11-18 20:37:29.904126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.904154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 
00:36:18.148 [2024-11-18 20:37:29.904282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.904308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 00:36:18.148 [2024-11-18 20:37:29.904466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.904494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 00:36:18.148 [2024-11-18 20:37:29.904621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.904655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 00:36:18.148 [2024-11-18 20:37:29.904772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.904812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 00:36:18.148 [2024-11-18 20:37:29.904971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.905009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 
00:36:18.148 [2024-11-18 20:37:29.905128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.905157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 00:36:18.148 [2024-11-18 20:37:29.905302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.905339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 00:36:18.148 [2024-11-18 20:37:29.905459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.905486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 00:36:18.148 [2024-11-18 20:37:29.905603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.905630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 00:36:18.148 [2024-11-18 20:37:29.905796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.905823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 
00:36:18.148 [2024-11-18 20:37:29.905911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.905938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 00:36:18.148 [2024-11-18 20:37:29.906037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.906063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 00:36:18.148 [2024-11-18 20:37:29.906218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.906245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 00:36:18.148 [2024-11-18 20:37:29.906337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.906364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 00:36:18.148 [2024-11-18 20:37:29.906454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.906482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 
00:36:18.148 [2024-11-18 20:37:29.906606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.906650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 00:36:18.148 [2024-11-18 20:37:29.906793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.906820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 00:36:18.148 [2024-11-18 20:37:29.906939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.906966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 00:36:18.148 [2024-11-18 20:37:29.907070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.907097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 00:36:18.148 [2024-11-18 20:37:29.907242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.907269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 
00:36:18.148 [2024-11-18 20:37:29.907357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.907385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 00:36:18.148 [2024-11-18 20:37:29.907468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.907495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 00:36:18.148 [2024-11-18 20:37:29.907605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.907632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.148 qpair failed and we were unable to recover it. 00:36:18.148 [2024-11-18 20:37:29.907738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.148 [2024-11-18 20:37:29.907765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 00:36:18.149 [2024-11-18 20:37:29.907884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.907912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 
00:36:18.149 [2024-11-18 20:37:29.908027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.908054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 00:36:18.149 [2024-11-18 20:37:29.908209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.908235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 00:36:18.149 [2024-11-18 20:37:29.908363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.908391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 00:36:18.149 [2024-11-18 20:37:29.908499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.908525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 00:36:18.149 [2024-11-18 20:37:29.908621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.908657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 
00:36:18.149 [2024-11-18 20:37:29.908757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.908785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 00:36:18.149 [2024-11-18 20:37:29.908902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.908930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 00:36:18.149 [2024-11-18 20:37:29.909048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.909076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 00:36:18.149 [2024-11-18 20:37:29.909200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.909228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 00:36:18.149 [2024-11-18 20:37:29.909355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.909382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 
00:36:18.149 [2024-11-18 20:37:29.909492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.909519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 00:36:18.149 [2024-11-18 20:37:29.909644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.909672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 00:36:18.149 [2024-11-18 20:37:29.909752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.909779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 00:36:18.149 [2024-11-18 20:37:29.909920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.909947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 00:36:18.149 [2024-11-18 20:37:29.910096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.910124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 
00:36:18.149 [2024-11-18 20:37:29.910207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.910243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 00:36:18.149 [2024-11-18 20:37:29.910361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.910388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 00:36:18.149 [2024-11-18 20:37:29.910467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.910495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 00:36:18.149 [2024-11-18 20:37:29.910607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.910645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 00:36:18.149 [2024-11-18 20:37:29.910789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.910816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 
00:36:18.149 [2024-11-18 20:37:29.910936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.910963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 00:36:18.149 [2024-11-18 20:37:29.911046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.911073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 00:36:18.149 [2024-11-18 20:37:29.911194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.911220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 00:36:18.149 [2024-11-18 20:37:29.911336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.911364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 00:36:18.149 [2024-11-18 20:37:29.911445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.911472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 
00:36:18.149 [2024-11-18 20:37:29.911546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.911573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 00:36:18.149 [2024-11-18 20:37:29.911693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.911721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 00:36:18.149 [2024-11-18 20:37:29.911837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.911864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 00:36:18.149 [2024-11-18 20:37:29.911953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.911980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 00:36:18.149 [2024-11-18 20:37:29.912139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.912166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 
00:36:18.149 [2024-11-18 20:37:29.912283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.912311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 00:36:18.149 [2024-11-18 20:37:29.912430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.912457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 00:36:18.149 [2024-11-18 20:37:29.912591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.912618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 00:36:18.149 [2024-11-18 20:37:29.912740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.912769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 00:36:18.149 [2024-11-18 20:37:29.912859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.912886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 
00:36:18.149 [2024-11-18 20:37:29.913015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.149 [2024-11-18 20:37:29.913042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.149 qpair failed and we were unable to recover it. 00:36:18.149 [2024-11-18 20:37:29.913161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.913188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.150 [2024-11-18 20:37:29.913327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.913356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.150 [2024-11-18 20:37:29.913485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.913512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.150 [2024-11-18 20:37:29.913661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.913689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 
00:36:18.150 [2024-11-18 20:37:29.913783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.913811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.150 [2024-11-18 20:37:29.913899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.913927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.150 [2024-11-18 20:37:29.914018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.914046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.150 [2024-11-18 20:37:29.914203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.914230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.150 [2024-11-18 20:37:29.914319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.914347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 
00:36:18.150 [2024-11-18 20:37:29.914470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.914498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.150 [2024-11-18 20:37:29.914610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.914649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.150 [2024-11-18 20:37:29.914784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.914820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.150 [2024-11-18 20:37:29.914918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.914947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.150 [2024-11-18 20:37:29.915099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.915127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 
00:36:18.150 [2024-11-18 20:37:29.915227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.915254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.150 [2024-11-18 20:37:29.915335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.915361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.150 [2024-11-18 20:37:29.915445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.915472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.150 [2024-11-18 20:37:29.915586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.915612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.150 [2024-11-18 20:37:29.915701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.915729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 
00:36:18.150 [2024-11-18 20:37:29.915843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.915870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.150 [2024-11-18 20:37:29.915949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.915976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.150 [2024-11-18 20:37:29.916099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.916126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.150 [2024-11-18 20:37:29.916219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.916262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.150 [2024-11-18 20:37:29.916350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.916378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 
00:36:18.150 [2024-11-18 20:37:29.916500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.916527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.150 [2024-11-18 20:37:29.916649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.916677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.150 [2024-11-18 20:37:29.916801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.916828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.150 [2024-11-18 20:37:29.916945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.916972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.150 [2024-11-18 20:37:29.917056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.917083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 
00:36:18.150 [2024-11-18 20:37:29.917202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.917231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.150 [2024-11-18 20:37:29.917351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.917380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.150 [2024-11-18 20:37:29.917499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.917526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.150 [2024-11-18 20:37:29.917660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.917688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.150 [2024-11-18 20:37:29.917802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.917829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 
00:36:18.150 [2024-11-18 20:37:29.917943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.917971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.150 [2024-11-18 20:37:29.918116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.918143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.150 [2024-11-18 20:37:29.918248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.918275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.150 [2024-11-18 20:37:29.918359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.150 [2024-11-18 20:37:29.918394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.150 qpair failed and we were unable to recover it. 00:36:18.151 [2024-11-18 20:37:29.918534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.918562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 
00:36:18.151 [2024-11-18 20:37:29.918705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.918732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 00:36:18.151 [2024-11-18 20:37:29.918844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.918871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 00:36:18.151 [2024-11-18 20:37:29.918988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.919022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 00:36:18.151 [2024-11-18 20:37:29.919157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.919184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 00:36:18.151 [2024-11-18 20:37:29.919273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.919301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 
00:36:18.151 [2024-11-18 20:37:29.919442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.919470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 00:36:18.151 [2024-11-18 20:37:29.919577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.919604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 00:36:18.151 [2024-11-18 20:37:29.919754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.919783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 00:36:18.151 [2024-11-18 20:37:29.919897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.919933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 00:36:18.151 [2024-11-18 20:37:29.920028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.920067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 
00:36:18.151 [2024-11-18 20:37:29.920223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.920256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 00:36:18.151 [2024-11-18 20:37:29.920342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.920370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 00:36:18.151 [2024-11-18 20:37:29.920490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.920517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 00:36:18.151 [2024-11-18 20:37:29.920667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.920695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 00:36:18.151 [2024-11-18 20:37:29.920792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.920819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 
00:36:18.151 [2024-11-18 20:37:29.920931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.920969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 00:36:18.151 [2024-11-18 20:37:29.921086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.921114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 00:36:18.151 [2024-11-18 20:37:29.921221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.921249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 00:36:18.151 [2024-11-18 20:37:29.921365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.921393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 00:36:18.151 [2024-11-18 20:37:29.921542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.921568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 
00:36:18.151 [2024-11-18 20:37:29.921704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.921733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 00:36:18.151 [2024-11-18 20:37:29.923647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.923678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 00:36:18.151 [2024-11-18 20:37:29.923811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.923839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 00:36:18.151 [2024-11-18 20:37:29.923961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.924004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 00:36:18.151 [2024-11-18 20:37:29.924124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.924151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 
00:36:18.151 [2024-11-18 20:37:29.924233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.924261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 00:36:18.151 [2024-11-18 20:37:29.924363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.924390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 00:36:18.151 [2024-11-18 20:37:29.924479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.924505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 00:36:18.151 [2024-11-18 20:37:29.924608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.924646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 00:36:18.151 [2024-11-18 20:37:29.924741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.924769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 
00:36:18.151 [2024-11-18 20:37:29.924865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.924893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 00:36:18.151 [2024-11-18 20:37:29.925007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.925036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 00:36:18.151 [2024-11-18 20:37:29.925135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.925162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 00:36:18.151 [2024-11-18 20:37:29.925282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.925310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 00:36:18.151 [2024-11-18 20:37:29.925432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.925470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 
00:36:18.151 [2024-11-18 20:37:29.925582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.925610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 00:36:18.151 [2024-11-18 20:37:29.925766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.151 [2024-11-18 20:37:29.925793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.151 qpair failed and we were unable to recover it. 00:36:18.152 [2024-11-18 20:37:29.925922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.152 [2024-11-18 20:37:29.925949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.152 qpair failed and we were unable to recover it. 00:36:18.152 [2024-11-18 20:37:29.926082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.152 [2024-11-18 20:37:29.926111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.152 qpair failed and we were unable to recover it. 00:36:18.152 [2024-11-18 20:37:29.926193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.152 [2024-11-18 20:37:29.926229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.152 qpair failed and we were unable to recover it. 
00:36:18.152 [2024-11-18 20:37:29.926324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.152 [2024-11-18 20:37:29.926353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.152 qpair failed and we were unable to recover it. 00:36:18.152 [2024-11-18 20:37:29.926465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.152 [2024-11-18 20:37:29.926493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.152 qpair failed and we were unable to recover it. 00:36:18.152 [2024-11-18 20:37:29.928648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.152 [2024-11-18 20:37:29.928680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.152 qpair failed and we were unable to recover it. 00:36:18.152 [2024-11-18 20:37:29.928781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.152 [2024-11-18 20:37:29.928811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.152 qpair failed and we were unable to recover it. 00:36:18.152 [2024-11-18 20:37:29.928984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.152 [2024-11-18 20:37:29.929012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.152 qpair failed and we were unable to recover it. 
00:36:18.152 [2024-11-18 20:37:29.929164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.152 [2024-11-18 20:37:29.929191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.152 qpair failed and we were unable to recover it. 00:36:18.152 [2024-11-18 20:37:29.929275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.152 [2024-11-18 20:37:29.929303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.152 qpair failed and we were unable to recover it. 00:36:18.152 [2024-11-18 20:37:29.929424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.152 [2024-11-18 20:37:29.929453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.152 qpair failed and we were unable to recover it. 00:36:18.152 [2024-11-18 20:37:29.929561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.152 [2024-11-18 20:37:29.929588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.152 qpair failed and we were unable to recover it. 00:36:18.152 [2024-11-18 20:37:29.929695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.152 [2024-11-18 20:37:29.929722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.152 qpair failed and we were unable to recover it. 
00:36:18.152 [2024-11-18 20:37:29.929846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.152 [2024-11-18 20:37:29.929874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.152 qpair failed and we were unable to recover it. 00:36:18.152 [2024-11-18 20:37:29.929992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.152 [2024-11-18 20:37:29.930019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.152 qpair failed and we were unable to recover it. 00:36:18.152 [2024-11-18 20:37:29.930137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.152 [2024-11-18 20:37:29.930164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.152 qpair failed and we were unable to recover it. 00:36:18.152 [2024-11-18 20:37:29.930309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.152 [2024-11-18 20:37:29.930336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.152 qpair failed and we were unable to recover it. 00:36:18.152 [2024-11-18 20:37:29.930436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.152 [2024-11-18 20:37:29.930463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.152 qpair failed and we were unable to recover it. 
00:36:18.152 [2024-11-18 20:37:29.930553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.152 [2024-11-18 20:37:29.930582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.152 qpair failed and we were unable to recover it. 00:36:18.152 [2024-11-18 20:37:29.930708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.152 [2024-11-18 20:37:29.930736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.152 qpair failed and we were unable to recover it. 00:36:18.152 [2024-11-18 20:37:29.930831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.152 [2024-11-18 20:37:29.930859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.152 qpair failed and we were unable to recover it. 00:36:18.152 [2024-11-18 20:37:29.930939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.152 [2024-11-18 20:37:29.930965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.152 qpair failed and we were unable to recover it. 00:36:18.152 [2024-11-18 20:37:29.931083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.152 [2024-11-18 20:37:29.931110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.152 qpair failed and we were unable to recover it. 
00:36:18.152 [2024-11-18 20:37:29.931228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.152 [2024-11-18 20:37:29.931255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.152 qpair failed and we were unable to recover it. 00:36:18.152 [2024-11-18 20:37:29.931408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.152 [2024-11-18 20:37:29.931438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.152 qpair failed and we were unable to recover it. 00:36:18.152 [2024-11-18 20:37:29.931563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.152 [2024-11-18 20:37:29.931591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.152 qpair failed and we were unable to recover it. 00:36:18.152 [2024-11-18 20:37:29.931716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.152 [2024-11-18 20:37:29.931748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.152 qpair failed and we were unable to recover it. 00:36:18.152 [2024-11-18 20:37:29.934649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.152 [2024-11-18 20:37:29.934682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.152 qpair failed and we were unable to recover it. 
00:36:18.152 [2024-11-18 20:37:29.934829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.152 [2024-11-18 20:37:29.934858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.152 qpair failed and we were unable to recover it.
00:36:18.152 [2024-11-18 20:37:29.934957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.152 [2024-11-18 20:37:29.934984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.152 qpair failed and we were unable to recover it.
00:36:18.152 [2024-11-18 20:37:29.935110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.152 [2024-11-18 20:37:29.935137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.152 qpair failed and we were unable to recover it.
00:36:18.152 [2024-11-18 20:37:29.935262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.152 [2024-11-18 20:37:29.935302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.152 qpair failed and we were unable to recover it.
00:36:18.152 [2024-11-18 20:37:29.935447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.152 [2024-11-18 20:37:29.935475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.152 qpair failed and we were unable to recover it.
00:36:18.152 [2024-11-18 20:37:29.935595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.152 [2024-11-18 20:37:29.935622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.152 qpair failed and we were unable to recover it.
00:36:18.152 [2024-11-18 20:37:29.935751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.152 [2024-11-18 20:37:29.935780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.152 qpair failed and we were unable to recover it.
00:36:18.152 [2024-11-18 20:37:29.935907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.152 [2024-11-18 20:37:29.935940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.152 qpair failed and we were unable to recover it.
00:36:18.152 [2024-11-18 20:37:29.936085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.152 [2024-11-18 20:37:29.936114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.152 qpair failed and we were unable to recover it.
00:36:18.152 [2024-11-18 20:37:29.936271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.152 [2024-11-18 20:37:29.936299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.152 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.936407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.936435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.936517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.936544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.936632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.936666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.936825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.936863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.937011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.937046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.937185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.937220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.937388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.937427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.937559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.937596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.937714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.937750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.937865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.937901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.938053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.938093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.938218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.938255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.938460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.938497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.938650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.938686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.938799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.938833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.939002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.939048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.939191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.939238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.939406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.939446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.939565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.939597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.939724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.939752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.939866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.939893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.940019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.940046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.940146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.940172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.940311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.940338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.940461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.940487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.940583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.940610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.940737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.940765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.940884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.940911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.941025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.941057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.941181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.941209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.941299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.941327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.941450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.941488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.153 qpair failed and we were unable to recover it.
00:36:18.153 [2024-11-18 20:37:29.941601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.153 [2024-11-18 20:37:29.941628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.941753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.941783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.941875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.941903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.942029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.942061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.942184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.942216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.942336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.942363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.942446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.942472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.942593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.942620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.942767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.942794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.942879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.942906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.943013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.943040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.943124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.943161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.943265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.943292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.943397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.943425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.943516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.943542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.943655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.943684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.943787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.943814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.943892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.943918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.944005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.944031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.944171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.944198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.944289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.944315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.944425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.944451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.944571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.944598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.944711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.944738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.944835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.944862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.944999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.945025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.945126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.945153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.945247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.945273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.945350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.945391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.945507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.945533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.945617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.945652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.945744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.945777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.945889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.945916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.946038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.946064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.946145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.946172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.946258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.946296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.946412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.946439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.946537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.946567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.154 [2024-11-18 20:37:29.946659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.154 [2024-11-18 20:37:29.946687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.154 qpair failed and we were unable to recover it.
00:36:18.155 [2024-11-18 20:37:29.946828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.155 [2024-11-18 20:37:29.946869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.155 qpair failed and we were unable to recover it.
00:36:18.155 [2024-11-18 20:37:29.946996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.155 [2024-11-18 20:37:29.947023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.155 qpair failed and we were unable to recover it.
00:36:18.155 [2024-11-18 20:37:29.947110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.155 [2024-11-18 20:37:29.947137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.155 qpair failed and we were unable to recover it.
00:36:18.155 [2024-11-18 20:37:29.947212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.155 [2024-11-18 20:37:29.947238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.155 qpair failed and we were unable to recover it.
00:36:18.155 [2024-11-18 20:37:29.947334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.155 [2024-11-18 20:37:29.947361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.155 qpair failed and we were unable to recover it.
00:36:18.155 [2024-11-18 20:37:29.947483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.155 [2024-11-18 20:37:29.947510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.155 qpair failed and we were unable to recover it.
00:36:18.155 [2024-11-18 20:37:29.947620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.155 [2024-11-18 20:37:29.947658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.155 qpair failed and we were unable to recover it.
00:36:18.155 [2024-11-18 20:37:29.947747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.155 [2024-11-18 20:37:29.947774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.155 qpair failed and we were unable to recover it.
00:36:18.155 [2024-11-18 20:37:29.947896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.155 [2024-11-18 20:37:29.947922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.155 qpair failed and we were unable to recover it.
00:36:18.155 [2024-11-18 20:37:29.948060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.155 [2024-11-18 20:37:29.948087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.155 qpair failed and we were unable to recover it.
00:36:18.155 [2024-11-18 20:37:29.948172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.155 [2024-11-18 20:37:29.948207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.155 qpair failed and we were unable to recover it.
00:36:18.155 [2024-11-18 20:37:29.948313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.155 [2024-11-18 20:37:29.948339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.155 qpair failed and we were unable to recover it.
00:36:18.155 [2024-11-18 20:37:29.948457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.155 [2024-11-18 20:37:29.948484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.155 qpair failed and we were unable to recover it.
00:36:18.155 [2024-11-18 20:37:29.948582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.155 [2024-11-18 20:37:29.948609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.155 qpair failed and we were unable to recover it.
00:36:18.155 [2024-11-18 20:37:29.948823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.155 [2024-11-18 20:37:29.948850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.155 qpair failed and we were unable to recover it. 00:36:18.155 [2024-11-18 20:37:29.948948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.155 [2024-11-18 20:37:29.948975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.155 qpair failed and we were unable to recover it. 00:36:18.155 [2024-11-18 20:37:29.949084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.155 [2024-11-18 20:37:29.949126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.155 qpair failed and we were unable to recover it. 00:36:18.155 [2024-11-18 20:37:29.949224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.155 [2024-11-18 20:37:29.949251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.155 qpair failed and we were unable to recover it. 00:36:18.155 [2024-11-18 20:37:29.949335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.155 [2024-11-18 20:37:29.949361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.155 qpair failed and we were unable to recover it. 
00:36:18.155 [2024-11-18 20:37:29.949442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.155 [2024-11-18 20:37:29.949469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.155 qpair failed and we were unable to recover it. 00:36:18.155 [2024-11-18 20:37:29.949593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.155 [2024-11-18 20:37:29.949620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.155 qpair failed and we were unable to recover it. 00:36:18.155 [2024-11-18 20:37:29.949779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.155 [2024-11-18 20:37:29.949808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.155 qpair failed and we were unable to recover it. 00:36:18.155 [2024-11-18 20:37:29.949998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.155 [2024-11-18 20:37:29.950025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.155 qpair failed and we were unable to recover it. 00:36:18.155 [2024-11-18 20:37:29.950131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.155 [2024-11-18 20:37:29.950158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.155 qpair failed and we were unable to recover it. 
00:36:18.155 [2024-11-18 20:37:29.950269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.155 [2024-11-18 20:37:29.950296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.155 qpair failed and we were unable to recover it. 00:36:18.155 [2024-11-18 20:37:29.950387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.155 [2024-11-18 20:37:29.950418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.155 qpair failed and we were unable to recover it. 00:36:18.155 [2024-11-18 20:37:29.950558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.155 [2024-11-18 20:37:29.950585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.155 qpair failed and we were unable to recover it. 00:36:18.155 [2024-11-18 20:37:29.950698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.155 [2024-11-18 20:37:29.950726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.155 qpair failed and we were unable to recover it. 00:36:18.155 [2024-11-18 20:37:29.950855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.155 [2024-11-18 20:37:29.950886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.155 qpair failed and we were unable to recover it. 
00:36:18.155 [2024-11-18 20:37:29.951003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.155 [2024-11-18 20:37:29.951029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.155 qpair failed and we were unable to recover it. 00:36:18.155 [2024-11-18 20:37:29.951121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.155 [2024-11-18 20:37:29.951147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.155 qpair failed and we were unable to recover it. 00:36:18.155 [2024-11-18 20:37:29.951259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.155 [2024-11-18 20:37:29.951285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.155 qpair failed and we were unable to recover it. 00:36:18.155 [2024-11-18 20:37:29.951399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.155 [2024-11-18 20:37:29.951426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.155 qpair failed and we were unable to recover it. 00:36:18.155 [2024-11-18 20:37:29.951567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.155 [2024-11-18 20:37:29.951593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.155 qpair failed and we were unable to recover it. 
00:36:18.155 [2024-11-18 20:37:29.951687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.155 [2024-11-18 20:37:29.951715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.155 qpair failed and we were unable to recover it. 00:36:18.155 [2024-11-18 20:37:29.951867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.155 [2024-11-18 20:37:29.951905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.155 qpair failed and we were unable to recover it. 00:36:18.155 [2024-11-18 20:37:29.952017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.155 [2024-11-18 20:37:29.952044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.155 qpair failed and we were unable to recover it. 00:36:18.155 [2024-11-18 20:37:29.952232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.155 [2024-11-18 20:37:29.952258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.155 qpair failed and we were unable to recover it. 00:36:18.155 [2024-11-18 20:37:29.952376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.952403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 
00:36:18.156 [2024-11-18 20:37:29.952534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.952561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 00:36:18.156 [2024-11-18 20:37:29.952702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.952729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 00:36:18.156 [2024-11-18 20:37:29.952817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.952843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 00:36:18.156 [2024-11-18 20:37:29.952928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.952955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 00:36:18.156 [2024-11-18 20:37:29.953049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.953075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 
00:36:18.156 [2024-11-18 20:37:29.953163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.953190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 00:36:18.156 [2024-11-18 20:37:29.953317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.953343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 00:36:18.156 [2024-11-18 20:37:29.953431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.953457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 00:36:18.156 [2024-11-18 20:37:29.953553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.953583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 00:36:18.156 [2024-11-18 20:37:29.953742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.953770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 
00:36:18.156 [2024-11-18 20:37:29.953930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.953958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 00:36:18.156 [2024-11-18 20:37:29.954056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.954083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 00:36:18.156 [2024-11-18 20:37:29.954195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.954222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 00:36:18.156 [2024-11-18 20:37:29.954329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.954371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 00:36:18.156 [2024-11-18 20:37:29.954458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.954485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 
00:36:18.156 [2024-11-18 20:37:29.954643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.954670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 00:36:18.156 [2024-11-18 20:37:29.954790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.954818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 00:36:18.156 [2024-11-18 20:37:29.954937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.954963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 00:36:18.156 [2024-11-18 20:37:29.955068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.955099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 00:36:18.156 [2024-11-18 20:37:29.955219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.955248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 
00:36:18.156 [2024-11-18 20:37:29.955366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.955393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 00:36:18.156 [2024-11-18 20:37:29.955556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.955583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 00:36:18.156 [2024-11-18 20:37:29.955667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.955694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 00:36:18.156 [2024-11-18 20:37:29.955790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.955822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 00:36:18.156 [2024-11-18 20:37:29.955945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.955981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 
00:36:18.156 [2024-11-18 20:37:29.956065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.956091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 00:36:18.156 [2024-11-18 20:37:29.956226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.956253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 00:36:18.156 [2024-11-18 20:37:29.956389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.956415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 00:36:18.156 [2024-11-18 20:37:29.956528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.956555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 00:36:18.156 [2024-11-18 20:37:29.956669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.956695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 
00:36:18.156 [2024-11-18 20:37:29.956775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.956801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 00:36:18.156 [2024-11-18 20:37:29.956919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.956948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 00:36:18.156 [2024-11-18 20:37:29.957094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.957121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 00:36:18.156 [2024-11-18 20:37:29.957200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.957226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 00:36:18.156 [2024-11-18 20:37:29.957339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.957366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 
00:36:18.156 [2024-11-18 20:37:29.957463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.957495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 00:36:18.156 [2024-11-18 20:37:29.957574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.957600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.156 qpair failed and we were unable to recover it. 00:36:18.156 [2024-11-18 20:37:29.957755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.156 [2024-11-18 20:37:29.957782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.157 qpair failed and we were unable to recover it. 00:36:18.157 [2024-11-18 20:37:29.957890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.157 [2024-11-18 20:37:29.957916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.157 qpair failed and we were unable to recover it. 00:36:18.157 [2024-11-18 20:37:29.958048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.157 [2024-11-18 20:37:29.958075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.157 qpair failed and we were unable to recover it. 
00:36:18.157 [2024-11-18 20:37:29.958192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.157 [2024-11-18 20:37:29.958219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.157 qpair failed and we were unable to recover it. 00:36:18.157 [2024-11-18 20:37:29.958342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.157 [2024-11-18 20:37:29.958369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.157 qpair failed and we were unable to recover it. 00:36:18.157 [2024-11-18 20:37:29.958488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.157 [2024-11-18 20:37:29.958515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.157 qpair failed and we were unable to recover it. 00:36:18.157 [2024-11-18 20:37:29.958633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.157 [2024-11-18 20:37:29.958666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.157 qpair failed and we were unable to recover it. 00:36:18.157 [2024-11-18 20:37:29.958810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.157 [2024-11-18 20:37:29.958837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.157 qpair failed and we were unable to recover it. 
00:36:18.157 [2024-11-18 20:37:29.958924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.157 [2024-11-18 20:37:29.958950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.157 qpair failed and we were unable to recover it. 00:36:18.157 [2024-11-18 20:37:29.959044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.157 [2024-11-18 20:37:29.959071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.157 [2024-11-18 20:37:29.959072] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:36:18.157 qpair failed and we were unable to recover it. 00:36:18.157 [2024-11-18 20:37:29.959145] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:18.157 [2024-11-18 20:37:29.959222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.157 [2024-11-18 20:37:29.959260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.157 qpair failed and we were unable to recover it. 00:36:18.157 [2024-11-18 20:37:29.959383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.157 [2024-11-18 20:37:29.959410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.157 qpair failed and we were unable to recover it. 
00:36:18.157 [... same sequence -- connect() failed, errno = 111 / sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. -- repeated through 2024-11-18 20:37:29.961698 ...]
00:36:18.157 [2024-11-18 20:37:29.961836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.157 [2024-11-18 20:37:29.961878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.157 qpair failed and we were unable to recover it. 00:36:18.157 [2024-11-18 20:37:29.962022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.157 [2024-11-18 20:37:29.962063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.157 qpair failed and we were unable to recover it.
00:36:18.157 [2024-11-18 20:37:29.962304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.157 [2024-11-18 20:37:29.962331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.157 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.962447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.962480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.962600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.962647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.962737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.962765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.962862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.962889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 
00:36:18.158 [2024-11-18 20:37:29.963004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.963030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.963158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.963185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.963281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.963308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.963399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.963426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.963537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.963565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 
00:36:18.158 [2024-11-18 20:37:29.963715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.963746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.963842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.963870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.963982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.964010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.964150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.964177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.964266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.964292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 
00:36:18.158 [2024-11-18 20:37:29.964410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.964438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.964516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.964542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.964629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.964662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.964814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.964840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.964960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.964986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 
00:36:18.158 [2024-11-18 20:37:29.965072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.965100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.965184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.965211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.965334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.965361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.965471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.965498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.965594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.965621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 
00:36:18.158 [2024-11-18 20:37:29.965738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.965765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.965868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.965894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.966021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.966047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.966129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.966155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.966267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.966293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 
00:36:18.158 [2024-11-18 20:37:29.966387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.966413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.966499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.966525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.966600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.966642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.966748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.966774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.966862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.966889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 
00:36:18.158 [2024-11-18 20:37:29.967035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.967061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.967147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.967174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.967320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.967360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.158 qpair failed and we were unable to recover it. 00:36:18.158 [2024-11-18 20:37:29.967529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.158 [2024-11-18 20:37:29.967564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 00:36:18.159 [2024-11-18 20:37:29.967698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.967734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 
00:36:18.159 [2024-11-18 20:37:29.967856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.967884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 00:36:18.159 [2024-11-18 20:37:29.968028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.968054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 00:36:18.159 [2024-11-18 20:37:29.968196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.968222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 00:36:18.159 [2024-11-18 20:37:29.968301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.968338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 00:36:18.159 [2024-11-18 20:37:29.968430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.968456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 
00:36:18.159 [2024-11-18 20:37:29.968568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.968595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 00:36:18.159 [2024-11-18 20:37:29.968689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.968717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 00:36:18.159 [2024-11-18 20:37:29.968826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.968852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 00:36:18.159 [2024-11-18 20:37:29.968994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.969020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 00:36:18.159 [2024-11-18 20:37:29.969136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.969164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 
00:36:18.159 [2024-11-18 20:37:29.969314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.969340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 00:36:18.159 [2024-11-18 20:37:29.969462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.969489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 00:36:18.159 [2024-11-18 20:37:29.969602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.969628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 00:36:18.159 [2024-11-18 20:37:29.969724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.969751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 00:36:18.159 [2024-11-18 20:37:29.969836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.969863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 
00:36:18.159 [2024-11-18 20:37:29.969956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.969982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 00:36:18.159 [2024-11-18 20:37:29.970100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.970127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 00:36:18.159 [2024-11-18 20:37:29.970222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.970249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 00:36:18.159 [2024-11-18 20:37:29.970344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.970370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 00:36:18.159 [2024-11-18 20:37:29.970468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.970519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 
00:36:18.159 [2024-11-18 20:37:29.970650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.970679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 00:36:18.159 [2024-11-18 20:37:29.970801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.970829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 00:36:18.159 [2024-11-18 20:37:29.970934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.970962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 00:36:18.159 [2024-11-18 20:37:29.971048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.971083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 00:36:18.159 [2024-11-18 20:37:29.971207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.971235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 
00:36:18.159 [2024-11-18 20:37:29.971328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.971355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 00:36:18.159 [2024-11-18 20:37:29.971441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.971469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 00:36:18.159 [2024-11-18 20:37:29.971589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.971615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 00:36:18.159 [2024-11-18 20:37:29.971719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.971747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 00:36:18.159 [2024-11-18 20:37:29.971837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.971864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 
00:36:18.159 [2024-11-18 20:37:29.971949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.971987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 00:36:18.159 [2024-11-18 20:37:29.972095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.972122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 00:36:18.159 [2024-11-18 20:37:29.972235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.972262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 00:36:18.159 [2024-11-18 20:37:29.972384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.159 [2024-11-18 20:37:29.972410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.159 qpair failed and we were unable to recover it. 00:36:18.160 [2024-11-18 20:37:29.972484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.160 [2024-11-18 20:37:29.972511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.160 qpair failed and we were unable to recover it. 
00:36:18.160 [2024-11-18 20:37:29.972669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.160 [2024-11-18 20:37:29.972709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.160 qpair failed and we were unable to recover it. 00:36:18.160 [2024-11-18 20:37:29.972827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.160 [2024-11-18 20:37:29.972856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.160 qpair failed and we were unable to recover it. 00:36:18.160 [2024-11-18 20:37:29.972997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.160 [2024-11-18 20:37:29.973030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.160 qpair failed and we were unable to recover it. 00:36:18.160 [2024-11-18 20:37:29.973143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.160 [2024-11-18 20:37:29.973170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.160 qpair failed and we were unable to recover it. 00:36:18.160 [2024-11-18 20:37:29.973289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.160 [2024-11-18 20:37:29.973328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.160 qpair failed and we were unable to recover it. 
00:36:18.160 [2024-11-18 20:37:29.973449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.160 [2024-11-18 20:37:29.973476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.160 qpair failed and we were unable to recover it. 00:36:18.160 [2024-11-18 20:37:29.973596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.160 [2024-11-18 20:37:29.973623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.160 qpair failed and we were unable to recover it. 00:36:18.160 [2024-11-18 20:37:29.973757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.160 [2024-11-18 20:37:29.973784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.160 qpair failed and we were unable to recover it. 00:36:18.160 [2024-11-18 20:37:29.973863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.160 [2024-11-18 20:37:29.973895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.160 qpair failed and we were unable to recover it. 00:36:18.160 [2024-11-18 20:37:29.974006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.160 [2024-11-18 20:37:29.974032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.160 qpair failed and we were unable to recover it. 
00:36:18.160 [2024-11-18 20:37:29.974146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.160 [2024-11-18 20:37:29.974172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.160 qpair failed and we were unable to recover it.
00:36:18.163 [... the same record repeats continuously from 20:37:29.974 through 20:37:29.990: posix.c:1054:posix_sock_create reports connect() failed, errno = 111, nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reports a sock connection error against addr=10.0.0.2, port=4420 for tqpair=0x7fe6a0000b90, tqpair=0x7fe698000b90, and tqpair=0x1671b40, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:36:18.163 [2024-11-18 20:37:29.991103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.163 [2024-11-18 20:37:29.991130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.163 qpair failed and we were unable to recover it. 00:36:18.163 [2024-11-18 20:37:29.991218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.163 [2024-11-18 20:37:29.991246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.163 qpair failed and we were unable to recover it. 00:36:18.163 [2024-11-18 20:37:29.991362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.163 [2024-11-18 20:37:29.991389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.163 qpair failed and we were unable to recover it. 00:36:18.163 [2024-11-18 20:37:29.991473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.163 [2024-11-18 20:37:29.991500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.163 qpair failed and we were unable to recover it. 00:36:18.163 [2024-11-18 20:37:29.991587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.163 [2024-11-18 20:37:29.991613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.163 qpair failed and we were unable to recover it. 
00:36:18.163 [2024-11-18 20:37:29.991737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.163 [2024-11-18 20:37:29.991764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.163 qpair failed and we were unable to recover it. 00:36:18.163 [2024-11-18 20:37:29.991874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.163 [2024-11-18 20:37:29.991901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.163 qpair failed and we were unable to recover it. 00:36:18.163 [2024-11-18 20:37:29.992011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.163 [2024-11-18 20:37:29.992037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.163 qpair failed and we were unable to recover it. 00:36:18.163 [2024-11-18 20:37:29.992184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.163 [2024-11-18 20:37:29.992210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.163 qpair failed and we were unable to recover it. 00:36:18.163 [2024-11-18 20:37:29.992318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.163 [2024-11-18 20:37:29.992345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.163 qpair failed and we were unable to recover it. 
00:36:18.163 [2024-11-18 20:37:29.992465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.163 [2024-11-18 20:37:29.992493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.163 qpair failed and we were unable to recover it. 00:36:18.163 [2024-11-18 20:37:29.992591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.163 [2024-11-18 20:37:29.992624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.163 qpair failed and we were unable to recover it. 00:36:18.163 [2024-11-18 20:37:29.992739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.163 [2024-11-18 20:37:29.992766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.163 qpair failed and we were unable to recover it. 00:36:18.163 [2024-11-18 20:37:29.992867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.163 [2024-11-18 20:37:29.992894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.163 qpair failed and we were unable to recover it. 00:36:18.163 [2024-11-18 20:37:29.992989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.163 [2024-11-18 20:37:29.993014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.163 qpair failed and we were unable to recover it. 
00:36:18.163 [2024-11-18 20:37:29.993140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.163 [2024-11-18 20:37:29.993166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.163 qpair failed and we were unable to recover it. 00:36:18.163 [2024-11-18 20:37:29.993252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.163 [2024-11-18 20:37:29.993277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.163 qpair failed and we were unable to recover it. 00:36:18.163 [2024-11-18 20:37:29.993425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.163 [2024-11-18 20:37:29.993452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.163 qpair failed and we were unable to recover it. 00:36:18.163 [2024-11-18 20:37:29.993538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.163 [2024-11-18 20:37:29.993563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.163 qpair failed and we were unable to recover it. 00:36:18.163 [2024-11-18 20:37:29.993679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.163 [2024-11-18 20:37:29.993704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.163 qpair failed and we were unable to recover it. 
00:36:18.164 [2024-11-18 20:37:29.993797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.993821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.164 [2024-11-18 20:37:29.993958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.993983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.164 [2024-11-18 20:37:29.994124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.994150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.164 [2024-11-18 20:37:29.994246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.994271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.164 [2024-11-18 20:37:29.994398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.994430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 
00:36:18.164 [2024-11-18 20:37:29.994577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.994602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.164 [2024-11-18 20:37:29.994710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.994736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.164 [2024-11-18 20:37:29.994851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.994876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.164 [2024-11-18 20:37:29.995026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.995051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.164 [2024-11-18 20:37:29.995128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.995152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 
00:36:18.164 [2024-11-18 20:37:29.995265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.995291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.164 [2024-11-18 20:37:29.995438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.995462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.164 [2024-11-18 20:37:29.995590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.995615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.164 [2024-11-18 20:37:29.995749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.995788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.164 [2024-11-18 20:37:29.995902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.995931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 
00:36:18.164 [2024-11-18 20:37:29.996029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.996056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.164 [2024-11-18 20:37:29.996146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.996174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.164 [2024-11-18 20:37:29.996318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.996345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.164 [2024-11-18 20:37:29.996468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.996495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.164 [2024-11-18 20:37:29.996607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.996650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 
00:36:18.164 [2024-11-18 20:37:29.996763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.996790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.164 [2024-11-18 20:37:29.996895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.996935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.164 [2024-11-18 20:37:29.997060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.997089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.164 [2024-11-18 20:37:29.997229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.997255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.164 [2024-11-18 20:37:29.997347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.997375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 
00:36:18.164 [2024-11-18 20:37:29.997496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.997523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.164 [2024-11-18 20:37:29.997660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.997700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.164 [2024-11-18 20:37:29.997824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.997852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.164 [2024-11-18 20:37:29.997947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.997975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.164 [2024-11-18 20:37:29.998116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.998143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 
00:36:18.164 [2024-11-18 20:37:29.998232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.998260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.164 [2024-11-18 20:37:29.998404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.998444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.164 [2024-11-18 20:37:29.998583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.998611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.164 [2024-11-18 20:37:29.998717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.998745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.164 [2024-11-18 20:37:29.998836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.998863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 
00:36:18.164 [2024-11-18 20:37:29.998980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.999007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.164 [2024-11-18 20:37:29.999149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.999176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.164 [2024-11-18 20:37:29.999297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.164 [2024-11-18 20:37:29.999324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.164 qpair failed and we were unable to recover it. 00:36:18.165 [2024-11-18 20:37:29.999441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.165 [2024-11-18 20:37:29.999468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.165 qpair failed and we were unable to recover it. 00:36:18.165 [2024-11-18 20:37:29.999554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.165 [2024-11-18 20:37:29.999581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.165 qpair failed and we were unable to recover it. 
00:36:18.165 [2024-11-18 20:37:29.999687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.165 [2024-11-18 20:37:29.999714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.165 qpair failed and we were unable to recover it. 00:36:18.165 [2024-11-18 20:37:29.999831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.165 [2024-11-18 20:37:29.999858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.165 qpair failed and we were unable to recover it. 00:36:18.165 [2024-11-18 20:37:29.999957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.165 [2024-11-18 20:37:29.999984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.165 qpair failed and we were unable to recover it. 00:36:18.165 [2024-11-18 20:37:30.000101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.165 [2024-11-18 20:37:30.000128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.165 qpair failed and we were unable to recover it. 00:36:18.165 [2024-11-18 20:37:30.000245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.165 [2024-11-18 20:37:30.000271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.165 qpair failed and we were unable to recover it. 
00:36:18.165 [2024-11-18 20:37:30.000388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.165 [2024-11-18 20:37:30.000417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.165 qpair failed and we were unable to recover it. 00:36:18.165 [2024-11-18 20:37:30.000500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.165 [2024-11-18 20:37:30.000529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.165 qpair failed and we were unable to recover it. 00:36:18.165 [2024-11-18 20:37:30.000656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.165 [2024-11-18 20:37:30.000684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.165 qpair failed and we were unable to recover it. 00:36:18.165 [2024-11-18 20:37:30.000800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.165 [2024-11-18 20:37:30.000827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.165 qpair failed and we were unable to recover it. 00:36:18.165 [2024-11-18 20:37:30.000910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.165 [2024-11-18 20:37:30.000941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.165 qpair failed and we were unable to recover it. 
00:36:18.165 [2024-11-18 20:37:30.001069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.165 [2024-11-18 20:37:30.001095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.165 qpair failed and we were unable to recover it. 00:36:18.165 [2024-11-18 20:37:30.001227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.165 [2024-11-18 20:37:30.001255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.165 qpair failed and we were unable to recover it. 00:36:18.165 [2024-11-18 20:37:30.001388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.165 [2024-11-18 20:37:30.001423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.165 qpair failed and we were unable to recover it. 00:36:18.165 [2024-11-18 20:37:30.001531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.165 [2024-11-18 20:37:30.001566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.165 qpair failed and we were unable to recover it. 00:36:18.165 [2024-11-18 20:37:30.001696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.165 [2024-11-18 20:37:30.001733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.165 qpair failed and we were unable to recover it. 
00:36:18.165 [2024-11-18 20:37:30.001832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.165 [2024-11-18 20:37:30.001865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.165 qpair failed and we were unable to recover it.
00:36:18.165 [2024-11-18 20:37:30.001997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.165 [2024-11-18 20:37:30.002025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.165 qpair failed and we were unable to recover it.
00:36:18.165 [2024-11-18 20:37:30.002111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.165 [2024-11-18 20:37:30.002138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.165 qpair failed and we were unable to recover it.
00:36:18.165 [2024-11-18 20:37:30.002234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.165 [2024-11-18 20:37:30.002262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.165 qpair failed and we were unable to recover it.
00:36:18.165 [2024-11-18 20:37:30.002404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.165 [2024-11-18 20:37:30.002431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.165 qpair failed and we were unable to recover it.
00:36:18.165 [2024-11-18 20:37:30.002517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.165 [2024-11-18 20:37:30.002544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.165 qpair failed and we were unable to recover it.
00:36:18.165 [2024-11-18 20:37:30.002672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.165 [2024-11-18 20:37:30.002700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.165 qpair failed and we were unable to recover it.
00:36:18.165 [2024-11-18 20:37:30.002789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.165 [2024-11-18 20:37:30.002816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.165 qpair failed and we were unable to recover it.
00:36:18.165 [2024-11-18 20:37:30.002910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.165 [2024-11-18 20:37:30.002941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.165 qpair failed and we were unable to recover it.
00:36:18.165 [2024-11-18 20:37:30.003055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.165 [2024-11-18 20:37:30.003082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.165 qpair failed and we were unable to recover it.
00:36:18.165 [2024-11-18 20:37:30.003171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.165 [2024-11-18 20:37:30.003198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.165 qpair failed and we were unable to recover it.
00:36:18.165 [2024-11-18 20:37:30.003315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.165 [2024-11-18 20:37:30.003342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.165 qpair failed and we were unable to recover it.
00:36:18.165 [2024-11-18 20:37:30.003450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.165 [2024-11-18 20:37:30.003478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.165 qpair failed and we were unable to recover it.
00:36:18.165 [2024-11-18 20:37:30.003618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.165 [2024-11-18 20:37:30.003655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.165 qpair failed and we were unable to recover it.
00:36:18.165 [2024-11-18 20:37:30.003745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.165 [2024-11-18 20:37:30.003772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.165 qpair failed and we were unable to recover it.
00:36:18.165 [2024-11-18 20:37:30.003866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.165 [2024-11-18 20:37:30.003904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.165 qpair failed and we were unable to recover it.
00:36:18.165 [2024-11-18 20:37:30.004000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.165 [2024-11-18 20:37:30.004034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.165 qpair failed and we were unable to recover it.
00:36:18.165 [2024-11-18 20:37:30.004117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.165 [2024-11-18 20:37:30.004147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.165 qpair failed and we were unable to recover it.
00:36:18.165 [2024-11-18 20:37:30.004271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.165 [2024-11-18 20:37:30.004299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.165 qpair failed and we were unable to recover it.
00:36:18.165 [2024-11-18 20:37:30.004419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.165 [2024-11-18 20:37:30.004447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.165 qpair failed and we were unable to recover it.
00:36:18.165 [2024-11-18 20:37:30.004539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.165 [2024-11-18 20:37:30.004567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.165 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.004685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.004713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.004812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.004839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.004924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.004958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.006362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.006405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.006517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.006551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.006649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.006677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.006773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.006799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.006905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.006945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.007107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.007136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.007259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.007286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.007427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.007455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.007560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.007601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.007741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.007770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.007859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.007887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.007978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.008007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.008120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.008147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.008264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.008291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.008367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.008394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.008502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.008528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.008623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.008660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.008775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.008803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.008892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.008920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.009008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.009035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.009166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.009192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.009321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.009362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.009489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.009518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.009647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.009676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.009770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.009798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.009890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.009917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.010030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.010058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.010138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.010172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.010288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.010314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.010422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.010448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.010530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.010557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.010675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.010705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.010842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.166 [2024-11-18 20:37:30.010888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.166 qpair failed and we were unable to recover it.
00:36:18.166 [2024-11-18 20:37:30.010985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.011014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.011134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.011161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.011276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.011314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.011427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.011454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.011570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.011598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.011694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.011722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.011804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.011831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.011922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.011956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.012080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.012108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.012221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.012251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.012345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.012372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.012487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.012514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.012628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.012674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.012799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.012826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.012908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.012944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.013052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.013078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.013153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.013179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.013293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.013318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.013465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.013494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.013615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.013662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.013752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.013781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.013864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.013892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.014048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.014076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.014158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.014185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.014330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.014359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.014479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.014508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.014624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.014667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.014807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.014834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.014919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.014953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.015040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.015067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.015151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.015179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.015334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.015361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.015469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.015497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.015654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.015682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.167 [2024-11-18 20:37:30.015788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.167 [2024-11-18 20:37:30.015815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.167 qpair failed and we were unable to recover it.
00:36:18.168 [2024-11-18 20:37:30.015931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.168 [2024-11-18 20:37:30.015968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.168 qpair failed and we were unable to recover it.
00:36:18.168 [2024-11-18 20:37:30.016053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.168 [2024-11-18 20:37:30.016080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.168 qpair failed and we were unable to recover it.
00:36:18.168 [2024-11-18 20:37:30.016231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.168 [2024-11-18 20:37:30.016259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.168 qpair failed and we were unable to recover it.
00:36:18.168 [2024-11-18 20:37:30.016344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.168 [2024-11-18 20:37:30.016373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.168 qpair failed and we were unable to recover it.
00:36:18.168 [2024-11-18 20:37:30.016504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.168 [2024-11-18 20:37:30.016545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.168 qpair failed and we were unable to recover it.
00:36:18.168 [2024-11-18 20:37:30.016669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.168 [2024-11-18 20:37:30.016709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.168 qpair failed and we were unable to recover it.
00:36:18.168 [2024-11-18 20:37:30.016868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.168 [2024-11-18 20:37:30.016907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.168 qpair failed and we were unable to recover it.
00:36:18.168 [2024-11-18 20:37:30.017070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.168 [2024-11-18 20:37:30.017098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.168 qpair failed and we were unable to recover it.
00:36:18.168 [2024-11-18 20:37:30.017185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.168 [2024-11-18 20:37:30.017213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.168 qpair failed and we were unable to recover it.
00:36:18.168 [2024-11-18 20:37:30.017356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.168 [2024-11-18 20:37:30.017383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.168 qpair failed and we were unable to recover it.
00:36:18.168 [2024-11-18 20:37:30.017474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.168 [2024-11-18 20:37:30.017502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.168 qpair failed and we were unable to recover it.
00:36:18.168 [2024-11-18 20:37:30.017613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.168 [2024-11-18 20:37:30.017652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.168 qpair failed and we were unable to recover it.
00:36:18.168 [2024-11-18 20:37:30.017775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.168 [2024-11-18 20:37:30.017802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.168 qpair failed and we were unable to recover it.
00:36:18.168 [2024-11-18 20:37:30.017884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.168 [2024-11-18 20:37:30.017911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.168 qpair failed and we were unable to recover it.
00:36:18.168 [2024-11-18 20:37:30.018055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.168 [2024-11-18 20:37:30.018082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.168 qpair failed and we were unable to recover it.
00:36:18.168 [2024-11-18 20:37:30.018189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.168 [2024-11-18 20:37:30.018215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.168 qpair failed and we were unable to recover it.
00:36:18.168 [2024-11-18 20:37:30.018303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.168 [2024-11-18 20:37:30.018330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.168 qpair failed and we were unable to recover it.
00:36:18.168 [2024-11-18 20:37:30.018425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.168 [2024-11-18 20:37:30.018452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.168 qpair failed and we were unable to recover it.
00:36:18.168 [2024-11-18 20:37:30.018577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.168 [2024-11-18 20:37:30.018617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.168 qpair failed and we were unable to recover it.
00:36:18.168 [2024-11-18 20:37:30.018734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.168 [2024-11-18 20:37:30.018775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.168 qpair failed and we were unable to recover it.
00:36:18.168 [2024-11-18 20:37:30.018926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.168 [2024-11-18 20:37:30.018955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.168 qpair failed and we were unable to recover it.
00:36:18.168 [2024-11-18 20:37:30.019073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.168 [2024-11-18 20:37:30.019102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.168 qpair failed and we were unable to recover it.
00:36:18.168 [2024-11-18 20:37:30.019221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.168 [2024-11-18 20:37:30.019247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.168 qpair failed and we were unable to recover it.
00:36:18.168 [2024-11-18 20:37:30.019389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.168 [2024-11-18 20:37:30.019416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.168 qpair failed and we were unable to recover it. 00:36:18.168 [2024-11-18 20:37:30.019506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.168 [2024-11-18 20:37:30.019532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.168 qpair failed and we were unable to recover it. 00:36:18.168 [2024-11-18 20:37:30.019644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.168 [2024-11-18 20:37:30.019671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.168 qpair failed and we were unable to recover it. 00:36:18.168 [2024-11-18 20:37:30.019754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.168 [2024-11-18 20:37:30.019781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.168 qpair failed and we were unable to recover it. 00:36:18.168 [2024-11-18 20:37:30.019856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.168 [2024-11-18 20:37:30.019883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.168 qpair failed and we were unable to recover it. 
00:36:18.168 [2024-11-18 20:37:30.020006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.168 [2024-11-18 20:37:30.020032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.168 qpair failed and we were unable to recover it. 00:36:18.168 [2024-11-18 20:37:30.020148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.168 [2024-11-18 20:37:30.020174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.168 qpair failed and we were unable to recover it. 00:36:18.168 [2024-11-18 20:37:30.020289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.168 [2024-11-18 20:37:30.020316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.168 qpair failed and we were unable to recover it. 00:36:18.168 [2024-11-18 20:37:30.020422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.168 [2024-11-18 20:37:30.020463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.168 qpair failed and we were unable to recover it. 00:36:18.168 [2024-11-18 20:37:30.020594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.168 [2024-11-18 20:37:30.020634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.168 qpair failed and we were unable to recover it. 
00:36:18.168 [2024-11-18 20:37:30.020745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.168 [2024-11-18 20:37:30.020773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.168 qpair failed and we were unable to recover it. 00:36:18.168 [2024-11-18 20:37:30.020865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.168 [2024-11-18 20:37:30.020892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.168 qpair failed and we were unable to recover it. 00:36:18.168 [2024-11-18 20:37:30.021014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.168 [2024-11-18 20:37:30.021042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.168 qpair failed and we were unable to recover it. 00:36:18.168 [2024-11-18 20:37:30.021160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.168 [2024-11-18 20:37:30.021198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.168 qpair failed and we were unable to recover it. 00:36:18.168 [2024-11-18 20:37:30.021313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.168 [2024-11-18 20:37:30.021341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.168 qpair failed and we were unable to recover it. 
00:36:18.168 [2024-11-18 20:37:30.021424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.021451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.021558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.021599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.021745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.021774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.021883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.021910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.022020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.022057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 
00:36:18.169 [2024-11-18 20:37:30.022179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.022206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.022294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.022320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.022446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.022474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.022608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.022666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.022787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.022816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 
00:36:18.169 [2024-11-18 20:37:30.022933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.022961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.023055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.023084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.023182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.023209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.023321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.023349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.023458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.023486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 
00:36:18.169 [2024-11-18 20:37:30.023603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.023648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.023736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.023763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.023873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.023901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.023990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.024018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.024111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.024151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 
00:36:18.169 [2024-11-18 20:37:30.024275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.024308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.024397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.024423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.024503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.024530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.024688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.024715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.024827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.024853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 
00:36:18.169 [2024-11-18 20:37:30.024942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.024969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.025081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.025107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.025220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.025248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.025397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.025426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.025569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.025609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 
00:36:18.169 [2024-11-18 20:37:30.025752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.025781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.025899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.025926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.026046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.026072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.026184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.026211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.026331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.026357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 
00:36:18.169 [2024-11-18 20:37:30.026457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.026498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.026610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.026675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.026821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.169 [2024-11-18 20:37:30.026850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.169 qpair failed and we were unable to recover it. 00:36:18.169 [2024-11-18 20:37:30.026999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.027026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 00:36:18.170 [2024-11-18 20:37:30.027117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.027144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 
00:36:18.170 [2024-11-18 20:37:30.027249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.027276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 00:36:18.170 [2024-11-18 20:37:30.027359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.027387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 00:36:18.170 [2024-11-18 20:37:30.027513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.027554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 00:36:18.170 [2024-11-18 20:37:30.027668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.027697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 00:36:18.170 [2024-11-18 20:37:30.027814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.027842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 
00:36:18.170 [2024-11-18 20:37:30.027954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.027981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 00:36:18.170 [2024-11-18 20:37:30.028088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.028114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 00:36:18.170 [2024-11-18 20:37:30.028233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.028262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 00:36:18.170 [2024-11-18 20:37:30.028350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.028377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 00:36:18.170 [2024-11-18 20:37:30.028482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.028523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 
00:36:18.170 [2024-11-18 20:37:30.028619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.028658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 00:36:18.170 [2024-11-18 20:37:30.028776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.028804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 00:36:18.170 [2024-11-18 20:37:30.028929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.028958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 00:36:18.170 [2024-11-18 20:37:30.029097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.029124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 00:36:18.170 [2024-11-18 20:37:30.029252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.029281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 
00:36:18.170 [2024-11-18 20:37:30.029420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.029447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 00:36:18.170 [2024-11-18 20:37:30.029535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.029564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 00:36:18.170 [2024-11-18 20:37:30.029687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.029713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 00:36:18.170 [2024-11-18 20:37:30.029803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.029829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 00:36:18.170 [2024-11-18 20:37:30.029952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.029978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 
00:36:18.170 [2024-11-18 20:37:30.030090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.030123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 00:36:18.170 [2024-11-18 20:37:30.030232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.030258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 00:36:18.170 [2024-11-18 20:37:30.030376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.030404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 00:36:18.170 [2024-11-18 20:37:30.030524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.030552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 00:36:18.170 [2024-11-18 20:37:30.030629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.030663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 
00:36:18.170 [2024-11-18 20:37:30.030752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.030779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 00:36:18.170 [2024-11-18 20:37:30.030855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.030883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 00:36:18.170 [2024-11-18 20:37:30.031019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.031047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 00:36:18.170 [2024-11-18 20:37:30.031156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.031184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 00:36:18.170 [2024-11-18 20:37:30.031303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.031329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 
00:36:18.170 [2024-11-18 20:37:30.031417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.031443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 00:36:18.170 [2024-11-18 20:37:30.031530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.031555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 00:36:18.170 [2024-11-18 20:37:30.031665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.031693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 00:36:18.170 [2024-11-18 20:37:30.031815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.031840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 00:36:18.170 [2024-11-18 20:37:30.031946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.170 [2024-11-18 20:37:30.031973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.170 qpair failed and we were unable to recover it. 
00:36:18.170 [2024-11-18 20:37:30.032119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.032146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 00:36:18.171 [2024-11-18 20:37:30.032263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.032289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 00:36:18.171 [2024-11-18 20:37:30.032373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.032400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 00:36:18.171 [2024-11-18 20:37:30.032559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.032599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 00:36:18.171 [2024-11-18 20:37:30.032739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.032779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 
00:36:18.171 [2024-11-18 20:37:30.032891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.032920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 00:36:18.171 [2024-11-18 20:37:30.032994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.033022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 00:36:18.171 [2024-11-18 20:37:30.033137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.033164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 00:36:18.171 [2024-11-18 20:37:30.033255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.033282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 00:36:18.171 [2024-11-18 20:37:30.033403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.033431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 
00:36:18.171 [2024-11-18 20:37:30.033565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.033594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 00:36:18.171 [2024-11-18 20:37:30.033688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.033717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 00:36:18.171 [2024-11-18 20:37:30.033827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.033862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 00:36:18.171 [2024-11-18 20:37:30.033940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.033967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 00:36:18.171 [2024-11-18 20:37:30.034047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.034074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 
00:36:18.171 [2024-11-18 20:37:30.034192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.034221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 00:36:18.171 [2024-11-18 20:37:30.034298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.034325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 00:36:18.171 [2024-11-18 20:37:30.034446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.034486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 00:36:18.171 [2024-11-18 20:37:30.034574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.034612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 00:36:18.171 [2024-11-18 20:37:30.034706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.034735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 
00:36:18.171 [2024-11-18 20:37:30.034859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.034892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 00:36:18.171 [2024-11-18 20:37:30.034994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.035027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 00:36:18.171 [2024-11-18 20:37:30.035167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.035204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 00:36:18.171 [2024-11-18 20:37:30.035331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.035372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 00:36:18.171 [2024-11-18 20:37:30.035494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.035536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 
00:36:18.171 [2024-11-18 20:37:30.035675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.035714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 00:36:18.171 [2024-11-18 20:37:30.035833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.035878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 00:36:18.171 [2024-11-18 20:37:30.035985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.036014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 00:36:18.171 [2024-11-18 20:37:30.036119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.036150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 00:36:18.171 [2024-11-18 20:37:30.036262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.036296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 
00:36:18.171 [2024-11-18 20:37:30.036444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.036474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 00:36:18.171 [2024-11-18 20:37:30.036563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.036591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 00:36:18.171 [2024-11-18 20:37:30.036696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.036723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 00:36:18.171 [2024-11-18 20:37:30.036813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.036842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 00:36:18.171 [2024-11-18 20:37:30.036932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.036959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 
00:36:18.171 [2024-11-18 20:37:30.037046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.171 [2024-11-18 20:37:30.037074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.171 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.037188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.037214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.037327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.037355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.037443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.037477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.037618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.037656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 
00:36:18.172 [2024-11-18 20:37:30.037776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.037804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.037891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.037919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.038035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.038062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.038168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.038209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.038298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.038327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 
00:36:18.172 [2024-11-18 20:37:30.038439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.038466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.038550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.038577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.038713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.038754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.038840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.038868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.038985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.039011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 
00:36:18.172 [2024-11-18 20:37:30.039125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.039153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.039267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.039293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.039434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.039467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.039668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.039696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.039813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.039839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 
00:36:18.172 [2024-11-18 20:37:30.039975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.040001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.040084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.040110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.040212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.040237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.040344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.040369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.040449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.040478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 
00:36:18.172 [2024-11-18 20:37:30.040611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.040660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.040748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.040777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.040859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.040886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.040997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.041024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.041144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.041171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 
00:36:18.172 [2024-11-18 20:37:30.041286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.041314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.041412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.041438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.041560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.041600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.041698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.041726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.041838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.041865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 
00:36:18.172 [2024-11-18 20:37:30.041975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.042003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.042115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.042142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.042246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.172 [2024-11-18 20:37:30.042287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.172 qpair failed and we were unable to recover it. 00:36:18.172 [2024-11-18 20:37:30.042404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.042432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 00:36:18.173 [2024-11-18 20:37:30.042519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.042545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 
00:36:18.173 [2024-11-18 20:37:30.042650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.042678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 00:36:18.173 [2024-11-18 20:37:30.042793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.042820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 00:36:18.173 [2024-11-18 20:37:30.042894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.042921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 00:36:18.173 [2024-11-18 20:37:30.043011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.043036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 00:36:18.173 [2024-11-18 20:37:30.043176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.043207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 
00:36:18.173 [2024-11-18 20:37:30.043366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.043406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 00:36:18.173 [2024-11-18 20:37:30.043529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.043557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 00:36:18.173 [2024-11-18 20:37:30.043675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.043703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 00:36:18.173 [2024-11-18 20:37:30.043814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.043841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 00:36:18.173 [2024-11-18 20:37:30.043981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.044008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 
00:36:18.173 [2024-11-18 20:37:30.044122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.044150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 00:36:18.173 [2024-11-18 20:37:30.044261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.044288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 00:36:18.173 [2024-11-18 20:37:30.044406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.044431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 00:36:18.173 [2024-11-18 20:37:30.044506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.044532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 00:36:18.173 [2024-11-18 20:37:30.044645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.044671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 
00:36:18.173 [2024-11-18 20:37:30.044780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.044806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 00:36:18.173 [2024-11-18 20:37:30.044899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.044924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 00:36:18.173 [2024-11-18 20:37:30.044992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:18.173 [2024-11-18 20:37:30.045043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.045078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 00:36:18.173 [2024-11-18 20:37:30.045193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.045220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 00:36:18.173 [2024-11-18 20:37:30.045337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.045363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 
00:36:18.173 [2024-11-18 20:37:30.045473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.045499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 00:36:18.173 [2024-11-18 20:37:30.045595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.045622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 00:36:18.173 [2024-11-18 20:37:30.045763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.045790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 00:36:18.173 [2024-11-18 20:37:30.045879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.045906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 00:36:18.173 [2024-11-18 20:37:30.045993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.046024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 
00:36:18.173 [2024-11-18 20:37:30.046144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.046171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 00:36:18.173 [2024-11-18 20:37:30.046254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.046281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 00:36:18.173 [2024-11-18 20:37:30.046361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.046388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 00:36:18.173 [2024-11-18 20:37:30.046491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.046530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 00:36:18.173 [2024-11-18 20:37:30.046625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.046665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 
00:36:18.173 [2024-11-18 20:37:30.046755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.046782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 00:36:18.173 [2024-11-18 20:37:30.046899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.046926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 00:36:18.173 [2024-11-18 20:37:30.047012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.047043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 00:36:18.173 [2024-11-18 20:37:30.047137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.047166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 00:36:18.173 [2024-11-18 20:37:30.047254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.047282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.173 qpair failed and we were unable to recover it. 
00:36:18.173 [2024-11-18 20:37:30.047375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.173 [2024-11-18 20:37:30.047403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.047497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.047537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.047625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.047662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.047758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.047786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.047903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.047931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 
00:36:18.174 [2024-11-18 20:37:30.048044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.048071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.048155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.048183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.048290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.048318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.048393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.048421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.048562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.048597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 
00:36:18.174 [2024-11-18 20:37:30.048718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.048746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.048830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.048858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.049003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.049032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.049148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.049176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.049291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.049318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 
00:36:18.174 [2024-11-18 20:37:30.049433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.049460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.049600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.049626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.049747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.049774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.049868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.049896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.050009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.050036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 
00:36:18.174 [2024-11-18 20:37:30.050158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.050186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.050306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.050334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.050445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.050472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.050592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.050619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.050742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.050769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 
00:36:18.174 [2024-11-18 20:37:30.050851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.050877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.050954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.050981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.051100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.051127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.051241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.051269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.051369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.051412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 
00:36:18.174 [2024-11-18 20:37:30.051520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.051548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.051659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.051699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.051806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.051833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.051922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.051950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.052097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.052123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 
00:36:18.174 [2024-11-18 20:37:30.052204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.052231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.052352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.052381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.052485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.052516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.052648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.052678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.052790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.052817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 
00:36:18.174 [2024-11-18 20:37:30.052930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.174 [2024-11-18 20:37:30.052957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.174 qpair failed and we were unable to recover it. 00:36:18.174 [2024-11-18 20:37:30.053098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-11-18 20:37:30.053126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-11-18 20:37:30.053242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-11-18 20:37:30.053270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-11-18 20:37:30.053359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-11-18 20:37:30.053386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-11-18 20:37:30.053489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-11-18 20:37:30.053530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 
00:36:18.175 [2024-11-18 20:37:30.053661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-11-18 20:37:30.053690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-11-18 20:37:30.053783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-11-18 20:37:30.053810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-11-18 20:37:30.053923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-11-18 20:37:30.053948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-11-18 20:37:30.054060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-11-18 20:37:30.054086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-11-18 20:37:30.054248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-11-18 20:37:30.054280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 
00:36:18.175 [2024-11-18 20:37:30.054398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-11-18 20:37:30.054426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-11-18 20:37:30.054513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-11-18 20:37:30.054540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-11-18 20:37:30.054617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-11-18 20:37:30.054651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-11-18 20:37:30.054767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-11-18 20:37:30.054794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-11-18 20:37:30.054883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-11-18 20:37:30.054910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 
00:36:18.175 [2024-11-18 20:37:30.055003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-11-18 20:37:30.055030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-11-18 20:37:30.055149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-11-18 20:37:30.055177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-11-18 20:37:30.055293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-11-18 20:37:30.055321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-11-18 20:37:30.055438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-11-18 20:37:30.055466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-11-18 20:37:30.055558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-11-18 20:37:30.055586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 
00:36:18.175 [2024-11-18 20:37:30.055729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-11-18 20:37:30.055762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-11-18 20:37:30.055883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-11-18 20:37:30.055911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-11-18 20:37:30.056032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-11-18 20:37:30.056058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-11-18 20:37:30.056178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-11-18 20:37:30.056205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-11-18 20:37:30.056331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-11-18 20:37:30.056357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 
00:36:18.175 [2024-11-18 20:37:30.056453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-11-18 20:37:30.056481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-11-18 20:37:30.056607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-11-18 20:37:30.056654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-11-18 20:37:30.056779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-11-18 20:37:30.056808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-11-18 20:37:30.056946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-11-18 20:37:30.056973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-11-18 20:37:30.057089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-11-18 20:37:30.057115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 
00:36:18.178 [2024-11-18 20:37:30.072789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-11-18 20:37:30.072816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-11-18 20:37:30.072909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-11-18 20:37:30.072936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.073030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.073057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.073141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.073173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.073295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.073322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 
00:36:18.179 [2024-11-18 20:37:30.073399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.073426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.073563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.073590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.073682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.073710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.073795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.073821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.073917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.073948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 
00:36:18.179 [2024-11-18 20:37:30.074050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.074078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.074169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.074196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.074314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.074342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.074455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.074482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.074607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.074657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 
00:36:18.179 [2024-11-18 20:37:30.074784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.074812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.074927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.074953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.075046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.075071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.075190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.075215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.075311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.075338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 
00:36:18.179 [2024-11-18 20:37:30.075426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.075453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.075548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.075577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.075692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.075732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.075839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.075880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.075977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.076005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 
00:36:18.179 [2024-11-18 20:37:30.076124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.076160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.076304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.076332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.076478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.076506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.076621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.076656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.076749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.076778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 
00:36:18.179 [2024-11-18 20:37:30.076895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.076928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.077021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.077050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.077130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.077158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.077266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.077293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.077406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.077435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 
00:36:18.179 [2024-11-18 20:37:30.077542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.077583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.077715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.077745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.077833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.077862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.077940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-11-18 20:37:30.077968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-11-18 20:37:30.078074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.078101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 
00:36:18.180 [2024-11-18 20:37:30.078200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.078240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-11-18 20:37:30.078361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.078389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-11-18 20:37:30.078507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.078537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-11-18 20:37:30.078628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.078664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-11-18 20:37:30.078784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.078811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 
00:36:18.180 [2024-11-18 20:37:30.078891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.078919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-11-18 20:37:30.079012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.079039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-11-18 20:37:30.079155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.079184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-11-18 20:37:30.079301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.079329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-11-18 20:37:30.079437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.079465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 
00:36:18.180 [2024-11-18 20:37:30.079552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.079580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-11-18 20:37:30.079667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.079697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-11-18 20:37:30.079788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.079815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-11-18 20:37:30.079903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.079931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-11-18 20:37:30.080041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.080068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 
00:36:18.180 [2024-11-18 20:37:30.080179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.080206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-11-18 20:37:30.080287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.080315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-11-18 20:37:30.080406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.080435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-11-18 20:37:30.080550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.080581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-11-18 20:37:30.080681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.080710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 
00:36:18.180 [2024-11-18 20:37:30.080824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.080851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-11-18 20:37:30.080968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.080996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-11-18 20:37:30.081081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.081109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-11-18 20:37:30.081195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.081224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-11-18 20:37:30.081314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.081342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 
00:36:18.180 [2024-11-18 20:37:30.081474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.081514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-11-18 20:37:30.081601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.081629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-11-18 20:37:30.081734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.081767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-11-18 20:37:30.081886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.081912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-11-18 20:37:30.081992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.082018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 
00:36:18.180 [2024-11-18 20:37:30.082105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.082138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-11-18 20:37:30.082244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-11-18 20:37:30.082272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.082360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.082386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.082471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.082498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.082607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.082653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 
00:36:18.181 [2024-11-18 20:37:30.082738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.082765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.082851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.082878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.082953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.082980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.083063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.083091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.083209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.083237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 
00:36:18.181 [2024-11-18 20:37:30.083364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.083393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.083513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.083543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.083668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.083695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.083782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.083809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.083896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.083924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 
00:36:18.181 [2024-11-18 20:37:30.084039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.084066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.084205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.084235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.084336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.084365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.084474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.084514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.084650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.084680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 
00:36:18.181 [2024-11-18 20:37:30.084795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.084823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.084910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.084937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.085051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.085078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.085196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.085225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.085338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.085367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 
00:36:18.181 [2024-11-18 20:37:30.085506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.085547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.085666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.085696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.085794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.085823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.085939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.085965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.086077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.086104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 
00:36:18.181 [2024-11-18 20:37:30.086221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.086248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.086332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.086359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.086499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.086526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.086662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.086704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.086827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.086856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 
00:36:18.181 [2024-11-18 20:37:30.086944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.086971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.087085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.087112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.087229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.087256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.087375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-11-18 20:37:30.087405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-11-18 20:37:30.087509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.087536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 
00:36:18.182 [2024-11-18 20:37:30.087649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.087682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-11-18 20:37:30.087804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.087832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-11-18 20:37:30.087921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.087954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-11-18 20:37:30.088116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.088156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-11-18 20:37:30.088276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.088305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 
00:36:18.182 [2024-11-18 20:37:30.088420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.088449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-11-18 20:37:30.088543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.088570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-11-18 20:37:30.088662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.088691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-11-18 20:37:30.088785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.088812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-11-18 20:37:30.088892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.088920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 
00:36:18.182 [2024-11-18 20:37:30.089031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.089058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-11-18 20:37:30.089179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.089208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-11-18 20:37:30.089296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.089324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-11-18 20:37:30.089451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.089493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-11-18 20:37:30.089621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.089656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 
00:36:18.182 [2024-11-18 20:37:30.089771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.089797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-11-18 20:37:30.089887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.089914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-11-18 20:37:30.090019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.090046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-11-18 20:37:30.090136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.090162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-11-18 20:37:30.090251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.090278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 
00:36:18.182 [2024-11-18 20:37:30.090404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.090445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-11-18 20:37:30.090543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.090571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-11-18 20:37:30.090685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.090715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-11-18 20:37:30.090804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.090832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-11-18 20:37:30.090944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.090972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 
00:36:18.182 [2024-11-18 20:37:30.091115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.091143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-11-18 20:37:30.091232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.091259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-11-18 20:37:30.091375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.091408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-11-18 20:37:30.091529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.091556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-11-18 20:37:30.091708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.091736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 
00:36:18.182 [2024-11-18 20:37:30.091829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.091855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-11-18 20:37:30.091936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.091961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-11-18 20:37:30.092054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.092079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-11-18 20:37:30.092158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.092185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-11-18 20:37:30.092274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.092300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 
00:36:18.182 [2024-11-18 20:37:30.092389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.092415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-11-18 20:37:30.092517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-11-18 20:37:30.092543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-11-18 20:37:30.092616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.092648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-11-18 20:37:30.092744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.092770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-11-18 20:37:30.092861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.092886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 
00:36:18.183 [2024-11-18 20:37:30.092977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.093002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-11-18 20:37:30.093098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.093136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-11-18 20:37:30.093234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.093275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-11-18 20:37:30.093361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.093391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-11-18 20:37:30.093481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.093508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 
00:36:18.183 [2024-11-18 20:37:30.093594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.093621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-11-18 20:37:30.093715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.093742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-11-18 20:37:30.093831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.093858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-11-18 20:37:30.093936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.093962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-11-18 20:37:30.094042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.094069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 
00:36:18.183 [2024-11-18 20:37:30.094151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.094176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-11-18 20:37:30.094256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.094284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-11-18 20:37:30.094376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.094407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-11-18 20:37:30.094497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.094526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-11-18 20:37:30.094629] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:18.183 [2024-11-18 20:37:30.094648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.094674] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:36:18.183 [2024-11-18 20:37:30.094680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.183 [2024-11-18 20:37:30.094691] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-11-18 20:37:30.094704] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:18.183 [2024-11-18 20:37:30.094714] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:18.183 [2024-11-18 20:37:30.094771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.094798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-11-18 20:37:30.094914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.094942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-11-18 20:37:30.095023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.095049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 
00:36:18.183 [2024-11-18 20:37:30.095125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.095153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-11-18 20:37:30.095260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.095286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-11-18 20:37:30.095374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.095403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-11-18 20:37:30.095524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.095552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-11-18 20:37:30.095645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.095675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 
00:36:18.183 [2024-11-18 20:37:30.095817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.095844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-11-18 20:37:30.095927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.095954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-11-18 20:37:30.096068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.096095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-11-18 20:37:30.096177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.096205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-11-18 20:37:30.096237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:36:18.183 [2024-11-18 20:37:30.096304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.096333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 
00:36:18.183 [2024-11-18 20:37:30.096288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:36:18.183 [2024-11-18 20:37:30.096334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:36:18.183 [2024-11-18 20:37:30.096338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:18.183 [2024-11-18 20:37:30.096415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.096443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-11-18 20:37:30.096584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.096611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-11-18 20:37:30.096718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.096745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-11-18 20:37:30.096828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-11-18 20:37:30.096856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 
00:36:18.183 [2024-11-18 20:37:30.096946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.096973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-11-18 20:37:30.097059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.097087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-11-18 20:37:30.097178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.097205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-11-18 20:37:30.097317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.097345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-11-18 20:37:30.097453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.097479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 
00:36:18.184 [2024-11-18 20:37:30.097571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.097601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-11-18 20:37:30.097717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.097757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-11-18 20:37:30.097847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.097875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-11-18 20:37:30.097963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.097991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-11-18 20:37:30.098098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.098126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 
00:36:18.184 [2024-11-18 20:37:30.098207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.098235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-11-18 20:37:30.098365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.098406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-11-18 20:37:30.098502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.098532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-11-18 20:37:30.098620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.098656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-11-18 20:37:30.098750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.098786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 
00:36:18.184 [2024-11-18 20:37:30.098876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.098903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-11-18 20:37:30.099014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.099041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-11-18 20:37:30.099155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.099181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-11-18 20:37:30.099268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.099298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-11-18 20:37:30.099386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.099415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 
00:36:18.184 [2024-11-18 20:37:30.099500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.099528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-11-18 20:37:30.099614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.099651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-11-18 20:37:30.099769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.099797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-11-18 20:37:30.099876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.099903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-11-18 20:37:30.099986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.100014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 
00:36:18.184 [2024-11-18 20:37:30.100124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.100152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-11-18 20:37:30.100266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.100295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-11-18 20:37:30.100412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.100441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-11-18 20:37:30.100550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.100577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-11-18 20:37:30.100660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.100688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 
00:36:18.184 [2024-11-18 20:37:30.100776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.100802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-11-18 20:37:30.100923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.100956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-11-18 20:37:30.101070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.101103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-11-18 20:37:30.101188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.101216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-11-18 20:37:30.101331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.101358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 
00:36:18.184 [2024-11-18 20:37:30.101435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.101462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-11-18 20:37:30.101574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.101601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-11-18 20:37:30.101720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-11-18 20:37:30.101747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.185 [2024-11-18 20:37:30.101832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.101859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-11-18 20:37:30.101966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.101993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 
00:36:18.185 [2024-11-18 20:37:30.102073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.102101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-11-18 20:37:30.102211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.102238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-11-18 20:37:30.102357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.102397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-11-18 20:37:30.102501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.102541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-11-18 20:37:30.102626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.102660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 
00:36:18.185 [2024-11-18 20:37:30.102748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.102776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-11-18 20:37:30.102872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.102900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-11-18 20:37:30.102983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.103010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-11-18 20:37:30.103087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.103115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-11-18 20:37:30.103215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.103243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 
00:36:18.185 [2024-11-18 20:37:30.103326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.103353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-11-18 20:37:30.103437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.103465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-11-18 20:37:30.103603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.103656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-11-18 20:37:30.103746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.103773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-11-18 20:37:30.103889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.103918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 
00:36:18.185 [2024-11-18 20:37:30.104025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.104051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-11-18 20:37:30.104170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.104210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-11-18 20:37:30.104304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.104331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-11-18 20:37:30.104420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.104447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-11-18 20:37:30.104538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.104571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 
00:36:18.185 [2024-11-18 20:37:30.104658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.104685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-11-18 20:37:30.104775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.104801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-11-18 20:37:30.104883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.104911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-11-18 20:37:30.104992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.105019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-11-18 20:37:30.105111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.105140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 
00:36:18.185 [2024-11-18 20:37:30.105220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.105251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-11-18 20:37:30.105363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.105404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-11-18 20:37:30.105491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.105518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-11-18 20:37:30.105600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.105627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-11-18 20:37:30.105723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-11-18 20:37:30.105751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 
00:36:18.186 [2024-11-18 20:37:30.105831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-11-18 20:37:30.105858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-11-18 20:37:30.105960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-11-18 20:37:30.105986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-11-18 20:37:30.106062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-11-18 20:37:30.106088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-11-18 20:37:30.106175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-11-18 20:37:30.106200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-11-18 20:37:30.106287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-11-18 20:37:30.106315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 
00:36:18.186 [2024-11-18 20:37:30.106395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-11-18 20:37:30.106423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-11-18 20:37:30.106512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-11-18 20:37:30.106541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-11-18 20:37:30.106661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-11-18 20:37:30.106688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-11-18 20:37:30.106766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-11-18 20:37:30.106793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-11-18 20:37:30.106880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-11-18 20:37:30.106907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 
00:36:18.186 [2024-11-18 20:37:30.107046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.186 [2024-11-18 20:37:30.107072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.186 qpair failed and we were unable to recover it.
00:36:18.186 [2024-11-18 20:37:30.107155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.186 [2024-11-18 20:37:30.107181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.186 qpair failed and we were unable to recover it.
00:36:18.186 [2024-11-18 20:37:30.107264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.186 [2024-11-18 20:37:30.107291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.186 qpair failed and we were unable to recover it.
00:36:18.186 [2024-11-18 20:37:30.107375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.186 [2024-11-18 20:37:30.107402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.186 qpair failed and we were unable to recover it.
00:36:18.186 [2024-11-18 20:37:30.107480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.186 [2024-11-18 20:37:30.107508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.186 qpair failed and we were unable to recover it.
00:36:18.186 [2024-11-18 20:37:30.107590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.186 [2024-11-18 20:37:30.107618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.186 qpair failed and we were unable to recover it.
00:36:18.186 [2024-11-18 20:37:30.107727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.186 [2024-11-18 20:37:30.107756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.186 qpair failed and we were unable to recover it.
00:36:18.186 [2024-11-18 20:37:30.107853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.186 [2024-11-18 20:37:30.107883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.186 qpair failed and we were unable to recover it.
00:36:18.186 [2024-11-18 20:37:30.107979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.186 [2024-11-18 20:37:30.108006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.186 qpair failed and we were unable to recover it.
00:36:18.186 [2024-11-18 20:37:30.108110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.186 [2024-11-18 20:37:30.108142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.186 qpair failed and we were unable to recover it.
00:36:18.186 [2024-11-18 20:37:30.108269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.186 [2024-11-18 20:37:30.108295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.186 qpair failed and we were unable to recover it.
00:36:18.186 [2024-11-18 20:37:30.108371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.186 [2024-11-18 20:37:30.108397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.186 qpair failed and we were unable to recover it.
00:36:18.186 [2024-11-18 20:37:30.108485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.186 [2024-11-18 20:37:30.108514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.186 qpair failed and we were unable to recover it.
00:36:18.186 [2024-11-18 20:37:30.108627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.186 [2024-11-18 20:37:30.108663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.186 qpair failed and we were unable to recover it.
00:36:18.186 [2024-11-18 20:37:30.108760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.186 [2024-11-18 20:37:30.108788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.186 qpair failed and we were unable to recover it.
00:36:18.186 [2024-11-18 20:37:30.108881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.186 [2024-11-18 20:37:30.108907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.186 qpair failed and we were unable to recover it.
00:36:18.186 [2024-11-18 20:37:30.109002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.186 [2024-11-18 20:37:30.109028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.186 qpair failed and we were unable to recover it.
00:36:18.186 [2024-11-18 20:37:30.109114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.186 [2024-11-18 20:37:30.109141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.186 qpair failed and we were unable to recover it.
00:36:18.186 [2024-11-18 20:37:30.109275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.186 [2024-11-18 20:37:30.109301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.186 qpair failed and we were unable to recover it.
00:36:18.186 [2024-11-18 20:37:30.109382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.186 [2024-11-18 20:37:30.109414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.186 qpair failed and we were unable to recover it.
00:36:18.186 [2024-11-18 20:37:30.109503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.186 [2024-11-18 20:37:30.109544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.186 qpair failed and we were unable to recover it.
00:36:18.186 [2024-11-18 20:37:30.109646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.186 [2024-11-18 20:37:30.109675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.186 qpair failed and we were unable to recover it.
00:36:18.186 [2024-11-18 20:37:30.109756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.186 [2024-11-18 20:37:30.109782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.186 qpair failed and we were unable to recover it.
00:36:18.186 [2024-11-18 20:37:30.109867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.186 [2024-11-18 20:37:30.109894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.186 qpair failed and we were unable to recover it.
00:36:18.186 [2024-11-18 20:37:30.109976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.186 [2024-11-18 20:37:30.110002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.186 qpair failed and we were unable to recover it.
00:36:18.186 [2024-11-18 20:37:30.110083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.186 [2024-11-18 20:37:30.110120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.186 qpair failed and we were unable to recover it.
00:36:18.186 [2024-11-18 20:37:30.110236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.186 [2024-11-18 20:37:30.110263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.186 qpair failed and we were unable to recover it.
00:36:18.186 [2024-11-18 20:37:30.110354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.186 [2024-11-18 20:37:30.110380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.187 qpair failed and we were unable to recover it.
00:36:18.187 [2024-11-18 20:37:30.110465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.187 [2024-11-18 20:37:30.110490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.187 qpair failed and we were unable to recover it.
00:36:18.187 [2024-11-18 20:37:30.110575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.187 [2024-11-18 20:37:30.110603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.187 qpair failed and we were unable to recover it.
00:36:18.187 [2024-11-18 20:37:30.110774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.187 [2024-11-18 20:37:30.110817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.187 qpair failed and we were unable to recover it.
00:36:18.187 [2024-11-18 20:37:30.110902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.187 [2024-11-18 20:37:30.110930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.187 qpair failed and we were unable to recover it.
00:36:18.187 [2024-11-18 20:37:30.111019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.187 [2024-11-18 20:37:30.111046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.187 qpair failed and we were unable to recover it.
00:36:18.187 [2024-11-18 20:37:30.111126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.187 [2024-11-18 20:37:30.111153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.187 qpair failed and we were unable to recover it.
00:36:18.187 [2024-11-18 20:37:30.111250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.187 [2024-11-18 20:37:30.111289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.187 qpair failed and we were unable to recover it.
00:36:18.187 [2024-11-18 20:37:30.111404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.187 [2024-11-18 20:37:30.111433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.187 qpair failed and we were unable to recover it.
00:36:18.187 [2024-11-18 20:37:30.111545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.187 [2024-11-18 20:37:30.111573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.187 qpair failed and we were unable to recover it.
00:36:18.187 [2024-11-18 20:37:30.111693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.187 [2024-11-18 20:37:30.111719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.187 qpair failed and we were unable to recover it.
00:36:18.187 [2024-11-18 20:37:30.111820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.187 [2024-11-18 20:37:30.111846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.187 qpair failed and we were unable to recover it.
00:36:18.187 [2024-11-18 20:37:30.111928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.187 [2024-11-18 20:37:30.111956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.187 qpair failed and we were unable to recover it.
00:36:18.187 [2024-11-18 20:37:30.112069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.187 [2024-11-18 20:37:30.112097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.187 qpair failed and we were unable to recover it.
00:36:18.187 [2024-11-18 20:37:30.112180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.187 [2024-11-18 20:37:30.112206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.187 qpair failed and we were unable to recover it.
00:36:18.187 [2024-11-18 20:37:30.112288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.472 [2024-11-18 20:37:30.112317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.472 qpair failed and we were unable to recover it.
00:36:18.472 [2024-11-18 20:37:30.112409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.472 [2024-11-18 20:37:30.112437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.472 qpair failed and we were unable to recover it.
00:36:18.472 [2024-11-18 20:37:30.112572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.472 [2024-11-18 20:37:30.112613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.472 qpair failed and we were unable to recover it.
00:36:18.472 [2024-11-18 20:37:30.112748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.472 [2024-11-18 20:37:30.112777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.472 qpair failed and we were unable to recover it.
00:36:18.472 [2024-11-18 20:37:30.112863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.472 [2024-11-18 20:37:30.112896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.472 qpair failed and we were unable to recover it.
00:36:18.472 [2024-11-18 20:37:30.112982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.472 [2024-11-18 20:37:30.113008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.472 qpair failed and we were unable to recover it.
00:36:18.472 [2024-11-18 20:37:30.113121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.472 [2024-11-18 20:37:30.113147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.472 qpair failed and we were unable to recover it.
00:36:18.472 [2024-11-18 20:37:30.113261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.472 [2024-11-18 20:37:30.113286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.472 qpair failed and we were unable to recover it.
00:36:18.472 [2024-11-18 20:37:30.113367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.472 [2024-11-18 20:37:30.113393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.472 qpair failed and we were unable to recover it.
00:36:18.472 [2024-11-18 20:37:30.113504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.472 [2024-11-18 20:37:30.113530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.472 qpair failed and we were unable to recover it.
00:36:18.472 [2024-11-18 20:37:30.113665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.472 [2024-11-18 20:37:30.113698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.472 qpair failed and we were unable to recover it.
00:36:18.472 [2024-11-18 20:37:30.113781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.472 [2024-11-18 20:37:30.113809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.472 qpair failed and we were unable to recover it.
00:36:18.472 [2024-11-18 20:37:30.113902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.472 [2024-11-18 20:37:30.113931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.472 qpair failed and we were unable to recover it.
00:36:18.472 [2024-11-18 20:37:30.114085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.472 [2024-11-18 20:37:30.114112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.472 qpair failed and we were unable to recover it.
00:36:18.472 [2024-11-18 20:37:30.114228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.472 [2024-11-18 20:37:30.114256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.472 qpair failed and we were unable to recover it.
00:36:18.472 [2024-11-18 20:37:30.114341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.472 [2024-11-18 20:37:30.114368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.472 qpair failed and we were unable to recover it.
00:36:18.472 [2024-11-18 20:37:30.114495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.472 [2024-11-18 20:37:30.114522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.472 qpair failed and we were unable to recover it.
00:36:18.472 [2024-11-18 20:37:30.114645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.472 [2024-11-18 20:37:30.114685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.472 qpair failed and we were unable to recover it.
00:36:18.472 [2024-11-18 20:37:30.114816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.472 [2024-11-18 20:37:30.114846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.472 qpair failed and we were unable to recover it.
00:36:18.472 [2024-11-18 20:37:30.114926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.472 [2024-11-18 20:37:30.114953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.472 qpair failed and we were unable to recover it.
00:36:18.472 [2024-11-18 20:37:30.115063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.472 [2024-11-18 20:37:30.115091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.472 qpair failed and we were unable to recover it.
00:36:18.472 [2024-11-18 20:37:30.115207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.472 [2024-11-18 20:37:30.115234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.472 qpair failed and we were unable to recover it.
00:36:18.472 [2024-11-18 20:37:30.115319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.472 [2024-11-18 20:37:30.115347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.472 qpair failed and we were unable to recover it.
00:36:18.472 [2024-11-18 20:37:30.115422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.472 [2024-11-18 20:37:30.115449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.472 qpair failed and we were unable to recover it.
00:36:18.472 [2024-11-18 20:37:30.115589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.472 [2024-11-18 20:37:30.115616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.472 qpair failed and we were unable to recover it.
00:36:18.472 [2024-11-18 20:37:30.115713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.472 [2024-11-18 20:37:30.115742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.472 qpair failed and we were unable to recover it.
00:36:18.472 [2024-11-18 20:37:30.115832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.472 [2024-11-18 20:37:30.115858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.472 qpair failed and we were unable to recover it.
00:36:18.472 [2024-11-18 20:37:30.115937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.472 [2024-11-18 20:37:30.115963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.472 qpair failed and we were unable to recover it.
00:36:18.472 [2024-11-18 20:37:30.116084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.472 [2024-11-18 20:37:30.116111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.472 qpair failed and we were unable to recover it.
00:36:18.472 [2024-11-18 20:37:30.116226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.116252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.116339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.116371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.116461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.116493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.116607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.116653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.116740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.116768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.116857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.116885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.117003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.117030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.117108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.117135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.117243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.117271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.117385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.117412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.117498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.117525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.117651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.117681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.117768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.117796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.117875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.117901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.117984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.118011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.118128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.118162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.118277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.118316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.118405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.118433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.118574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.118601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.118691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.118719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.118802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.118829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.118915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.118942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.119030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.119057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.119167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.119195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.119273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.119301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.119385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.119412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.119492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.119519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.119663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.119691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.119774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.119800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.119885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.119912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.119987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.120013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.120130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.120159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.473 qpair failed and we were unable to recover it.
00:36:18.473 [2024-11-18 20:37:30.120244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.473 [2024-11-18 20:37:30.120272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.474 qpair failed and we were unable to recover it.
00:36:18.474 [2024-11-18 20:37:30.120353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.474 [2024-11-18 20:37:30.120382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.474 qpair failed and we were unable to recover it.
00:36:18.474 [2024-11-18 20:37:30.120468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.474 [2024-11-18 20:37:30.120494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.474 qpair failed and we were unable to recover it.
00:36:18.474 [2024-11-18 20:37:30.120575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.474 [2024-11-18 20:37:30.120601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.474 qpair failed and we were unable to recover it.
00:36:18.474 [2024-11-18 20:37:30.120716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.474 [2024-11-18 20:37:30.120743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.474 qpair failed and we were unable to recover it.
00:36:18.474 [2024-11-18 20:37:30.120832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.474 [2024-11-18 20:37:30.120859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.474 qpair failed and we were unable to recover it.
00:36:18.474 [2024-11-18 20:37:30.120946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.474 [2024-11-18 20:37:30.120973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.474 qpair failed and we were unable to recover it.
00:36:18.474 [2024-11-18 20:37:30.121053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.474 [2024-11-18 20:37:30.121080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.474 qpair failed and we were unable to recover it.
00:36:18.474 [2024-11-18 20:37:30.121190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.474 [2024-11-18 20:37:30.121218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.474 qpair failed and we were unable to recover it.
00:36:18.474 [2024-11-18 20:37:30.121327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.474 [2024-11-18 20:37:30.121354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.474 qpair failed and we were unable to recover it.
00:36:18.474 [2024-11-18 20:37:30.121461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.474 [2024-11-18 20:37:30.121507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.474 qpair failed and we were unable to recover it.
00:36:18.474 [2024-11-18 20:37:30.121599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.474 [2024-11-18 20:37:30.121627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.474 qpair failed and we were unable to recover it.
00:36:18.474 [2024-11-18 20:37:30.121755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.474 [2024-11-18 20:37:30.121781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.474 qpair failed and we were unable to recover it. 00:36:18.474 [2024-11-18 20:37:30.121863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.474 [2024-11-18 20:37:30.121888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.474 qpair failed and we were unable to recover it. 00:36:18.474 [2024-11-18 20:37:30.121995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.474 [2024-11-18 20:37:30.122021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.474 qpair failed and we were unable to recover it. 00:36:18.474 [2024-11-18 20:37:30.122138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.474 [2024-11-18 20:37:30.122164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.474 qpair failed and we were unable to recover it. 00:36:18.474 [2024-11-18 20:37:30.122248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.474 [2024-11-18 20:37:30.122275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.474 qpair failed and we were unable to recover it. 
00:36:18.474 [2024-11-18 20:37:30.122358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.474 [2024-11-18 20:37:30.122385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.474 qpair failed and we were unable to recover it. 00:36:18.474 [2024-11-18 20:37:30.122462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.474 [2024-11-18 20:37:30.122489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.474 qpair failed and we were unable to recover it. 00:36:18.474 [2024-11-18 20:37:30.122573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.474 [2024-11-18 20:37:30.122600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.474 qpair failed and we were unable to recover it. 00:36:18.474 [2024-11-18 20:37:30.122691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.474 [2024-11-18 20:37:30.122718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.474 qpair failed and we were unable to recover it. 00:36:18.474 [2024-11-18 20:37:30.122804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.474 [2024-11-18 20:37:30.122831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.474 qpair failed and we were unable to recover it. 
00:36:18.474 [2024-11-18 20:37:30.122948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.474 [2024-11-18 20:37:30.122975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.474 qpair failed and we were unable to recover it. 00:36:18.474 [2024-11-18 20:37:30.123057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.474 [2024-11-18 20:37:30.123085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.474 qpair failed and we were unable to recover it. 00:36:18.474 [2024-11-18 20:37:30.123203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.474 [2024-11-18 20:37:30.123230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.474 qpair failed and we were unable to recover it. 00:36:18.474 [2024-11-18 20:37:30.123341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.474 [2024-11-18 20:37:30.123370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.474 qpair failed and we were unable to recover it. 00:36:18.474 [2024-11-18 20:37:30.123473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.474 [2024-11-18 20:37:30.123518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.474 qpair failed and we were unable to recover it. 
00:36:18.474 [2024-11-18 20:37:30.123625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.474 [2024-11-18 20:37:30.123675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.474 qpair failed and we were unable to recover it. 00:36:18.474 [2024-11-18 20:37:30.123772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.474 [2024-11-18 20:37:30.123801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.474 qpair failed and we were unable to recover it. 00:36:18.474 [2024-11-18 20:37:30.123921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.474 [2024-11-18 20:37:30.123948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.474 qpair failed and we were unable to recover it. 00:36:18.474 [2024-11-18 20:37:30.124033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.474 [2024-11-18 20:37:30.124059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.474 qpair failed and we were unable to recover it. 00:36:18.474 [2024-11-18 20:37:30.124172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.474 [2024-11-18 20:37:30.124199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.474 qpair failed and we were unable to recover it. 
00:36:18.474 [2024-11-18 20:37:30.124299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.474 [2024-11-18 20:37:30.124326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.474 qpair failed and we were unable to recover it. 00:36:18.474 [2024-11-18 20:37:30.124434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.474 [2024-11-18 20:37:30.124464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.474 qpair failed and we were unable to recover it. 00:36:18.474 [2024-11-18 20:37:30.124609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.474 [2024-11-18 20:37:30.124652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.474 qpair failed and we were unable to recover it. 00:36:18.474 [2024-11-18 20:37:30.124738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.474 [2024-11-18 20:37:30.124765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.474 qpair failed and we were unable to recover it. 00:36:18.474 [2024-11-18 20:37:30.124843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.474 [2024-11-18 20:37:30.124871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.474 qpair failed and we were unable to recover it. 
00:36:18.474 [2024-11-18 20:37:30.125033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.475 [2024-11-18 20:37:30.125066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.475 qpair failed and we were unable to recover it. 00:36:18.475 [2024-11-18 20:37:30.125182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.475 [2024-11-18 20:37:30.125208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.475 qpair failed and we were unable to recover it. 00:36:18.475 [2024-11-18 20:37:30.125297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.475 [2024-11-18 20:37:30.125324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.475 qpair failed and we were unable to recover it. 00:36:18.475 [2024-11-18 20:37:30.125434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.475 [2024-11-18 20:37:30.125463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.475 qpair failed and we were unable to recover it. 00:36:18.475 [2024-11-18 20:37:30.125594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.475 [2024-11-18 20:37:30.125634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.475 qpair failed and we were unable to recover it. 
00:36:18.475 [2024-11-18 20:37:30.125766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.475 [2024-11-18 20:37:30.125794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.475 qpair failed and we were unable to recover it. 00:36:18.475 [2024-11-18 20:37:30.125872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.475 [2024-11-18 20:37:30.125898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.475 qpair failed and we were unable to recover it. 00:36:18.475 [2024-11-18 20:37:30.125986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.475 [2024-11-18 20:37:30.126012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.475 qpair failed and we were unable to recover it. 00:36:18.475 [2024-11-18 20:37:30.126106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.475 [2024-11-18 20:37:30.126134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.475 qpair failed and we were unable to recover it. 00:36:18.475 [2024-11-18 20:37:30.126218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.475 [2024-11-18 20:37:30.126246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.475 qpair failed and we were unable to recover it. 
00:36:18.475 [2024-11-18 20:37:30.126338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.475 [2024-11-18 20:37:30.126367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.475 qpair failed and we were unable to recover it. 00:36:18.475 [2024-11-18 20:37:30.126454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.475 [2024-11-18 20:37:30.126482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.475 qpair failed and we were unable to recover it. 00:36:18.475 [2024-11-18 20:37:30.126601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.475 [2024-11-18 20:37:30.126629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.475 qpair failed and we were unable to recover it. 00:36:18.475 [2024-11-18 20:37:30.126732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.475 [2024-11-18 20:37:30.126760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.475 qpair failed and we were unable to recover it. 00:36:18.475 [2024-11-18 20:37:30.126863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.475 [2024-11-18 20:37:30.126890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.475 qpair failed and we were unable to recover it. 
00:36:18.475 [2024-11-18 20:37:30.126972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.475 [2024-11-18 20:37:30.126999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.475 qpair failed and we were unable to recover it. 00:36:18.475 [2024-11-18 20:37:30.127115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.475 [2024-11-18 20:37:30.127142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.475 qpair failed and we were unable to recover it. 00:36:18.475 [2024-11-18 20:37:30.127255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.475 [2024-11-18 20:37:30.127282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.475 qpair failed and we were unable to recover it. 00:36:18.475 [2024-11-18 20:37:30.127357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.475 [2024-11-18 20:37:30.127385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.475 qpair failed and we were unable to recover it. 00:36:18.475 [2024-11-18 20:37:30.127506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.475 [2024-11-18 20:37:30.127547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.475 qpair failed and we were unable to recover it. 
00:36:18.475 [2024-11-18 20:37:30.127650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.475 [2024-11-18 20:37:30.127691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.475 qpair failed and we were unable to recover it. 00:36:18.475 [2024-11-18 20:37:30.127777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.475 [2024-11-18 20:37:30.127806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.475 qpair failed and we were unable to recover it. 00:36:18.475 [2024-11-18 20:37:30.127903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.475 [2024-11-18 20:37:30.127930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.475 qpair failed and we were unable to recover it. 00:36:18.475 [2024-11-18 20:37:30.128011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.475 [2024-11-18 20:37:30.128037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.475 qpair failed and we were unable to recover it. 00:36:18.475 [2024-11-18 20:37:30.128134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.475 [2024-11-18 20:37:30.128172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.475 qpair failed and we were unable to recover it. 
00:36:18.475 [2024-11-18 20:37:30.128252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.475 [2024-11-18 20:37:30.128279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.475 qpair failed and we were unable to recover it. 00:36:18.475 [2024-11-18 20:37:30.128401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.475 [2024-11-18 20:37:30.128428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.475 qpair failed and we were unable to recover it. 00:36:18.475 [2024-11-18 20:37:30.128531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.475 [2024-11-18 20:37:30.128572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.475 qpair failed and we were unable to recover it. 00:36:18.475 [2024-11-18 20:37:30.128667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.475 [2024-11-18 20:37:30.128697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.475 qpair failed and we were unable to recover it. 00:36:18.475 [2024-11-18 20:37:30.128790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.476 [2024-11-18 20:37:30.128831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.476 qpair failed and we were unable to recover it. 
00:36:18.476 [2024-11-18 20:37:30.128932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.476 [2024-11-18 20:37:30.128962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.476 qpair failed and we were unable to recover it. 00:36:18.476 [2024-11-18 20:37:30.129080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.476 [2024-11-18 20:37:30.129107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.476 qpair failed and we were unable to recover it. 00:36:18.476 [2024-11-18 20:37:30.129223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.476 [2024-11-18 20:37:30.129251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.476 qpair failed and we were unable to recover it. 00:36:18.476 [2024-11-18 20:37:30.129335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.476 [2024-11-18 20:37:30.129362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.476 qpair failed and we were unable to recover it. 00:36:18.476 [2024-11-18 20:37:30.129437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.476 [2024-11-18 20:37:30.129464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.476 qpair failed and we were unable to recover it. 
00:36:18.476 [2024-11-18 20:37:30.129548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.476 [2024-11-18 20:37:30.129577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.476 qpair failed and we were unable to recover it. 00:36:18.476 [2024-11-18 20:37:30.129694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.476 [2024-11-18 20:37:30.129723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.476 qpair failed and we were unable to recover it. 00:36:18.476 [2024-11-18 20:37:30.129815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.476 [2024-11-18 20:37:30.129845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.476 qpair failed and we were unable to recover it. 00:36:18.476 [2024-11-18 20:37:30.129922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.476 [2024-11-18 20:37:30.129949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.476 qpair failed and we were unable to recover it. 00:36:18.476 [2024-11-18 20:37:30.130052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.476 [2024-11-18 20:37:30.130078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.476 qpair failed and we were unable to recover it. 
00:36:18.476 [2024-11-18 20:37:30.130194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.476 [2024-11-18 20:37:30.130226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.476 qpair failed and we were unable to recover it. 00:36:18.476 [2024-11-18 20:37:30.130309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.476 [2024-11-18 20:37:30.130341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.476 qpair failed and we were unable to recover it. 00:36:18.476 [2024-11-18 20:37:30.130421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.476 [2024-11-18 20:37:30.130447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.476 qpair failed and we were unable to recover it. 00:36:18.476 [2024-11-18 20:37:30.130539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.476 [2024-11-18 20:37:30.130579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.476 qpair failed and we were unable to recover it. 00:36:18.476 [2024-11-18 20:37:30.130733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.476 [2024-11-18 20:37:30.130762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.476 qpair failed and we were unable to recover it. 
00:36:18.476 [2024-11-18 20:37:30.130852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.476 [2024-11-18 20:37:30.130881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.476 qpair failed and we were unable to recover it. 00:36:18.476 [2024-11-18 20:37:30.130966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.476 [2024-11-18 20:37:30.130993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.476 qpair failed and we were unable to recover it. 00:36:18.476 [2024-11-18 20:37:30.131102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.476 [2024-11-18 20:37:30.131129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.476 qpair failed and we were unable to recover it. 00:36:18.476 [2024-11-18 20:37:30.131237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.476 [2024-11-18 20:37:30.131264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.476 qpair failed and we were unable to recover it. 00:36:18.476 [2024-11-18 20:37:30.131337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.476 [2024-11-18 20:37:30.131365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.476 qpair failed and we were unable to recover it. 
00:36:18.476 [2024-11-18 20:37:30.131446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.476 [2024-11-18 20:37:30.131473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.476 qpair failed and we were unable to recover it.
00:36:18.476 [2024-11-18 20:37:30.131619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.476 [2024-11-18 20:37:30.131654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.476 qpair failed and we were unable to recover it.
00:36:18.476 [2024-11-18 20:37:30.131745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.476 [2024-11-18 20:37:30.131773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.476 qpair failed and we were unable to recover it.
00:36:18.476 [2024-11-18 20:37:30.131897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.476 [2024-11-18 20:37:30.131927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.476 qpair failed and we were unable to recover it.
00:36:18.476 [2024-11-18 20:37:30.132024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.476 [2024-11-18 20:37:30.132051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.476 qpair failed and we were unable to recover it.
00:36:18.476 [2024-11-18 20:37:30.132158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.476 [2024-11-18 20:37:30.132185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.476 qpair failed and we were unable to recover it.
00:36:18.476 [2024-11-18 20:37:30.132263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.476 [2024-11-18 20:37:30.132290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.476 qpair failed and we were unable to recover it.
00:36:18.476 [2024-11-18 20:37:30.132380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.476 [2024-11-18 20:37:30.132406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.476 qpair failed and we were unable to recover it.
00:36:18.476 [2024-11-18 20:37:30.132526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.476 [2024-11-18 20:37:30.132552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.476 qpair failed and we were unable to recover it.
00:36:18.476 [2024-11-18 20:37:30.132663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.476 [2024-11-18 20:37:30.132692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.476 qpair failed and we were unable to recover it.
00:36:18.476 [2024-11-18 20:37:30.132819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.477 [2024-11-18 20:37:30.132847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.477 qpair failed and we were unable to recover it.
00:36:18.477 [2024-11-18 20:37:30.132929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.477 [2024-11-18 20:37:30.132956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.477 qpair failed and we were unable to recover it.
00:36:18.477 [2024-11-18 20:37:30.133033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.477 [2024-11-18 20:37:30.133060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.477 qpair failed and we were unable to recover it.
00:36:18.477 [2024-11-18 20:37:30.133169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.477 [2024-11-18 20:37:30.133207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.477 qpair failed and we were unable to recover it.
00:36:18.477 [2024-11-18 20:37:30.133319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.477 [2024-11-18 20:37:30.133345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.477 qpair failed and we were unable to recover it.
00:36:18.477 [2024-11-18 20:37:30.133458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.477 [2024-11-18 20:37:30.133485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.477 qpair failed and we were unable to recover it.
00:36:18.477 [2024-11-18 20:37:30.133566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.477 [2024-11-18 20:37:30.133593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.477 qpair failed and we were unable to recover it.
00:36:18.477 [2024-11-18 20:37:30.133699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.477 [2024-11-18 20:37:30.133739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.477 qpair failed and we were unable to recover it.
00:36:18.477 [2024-11-18 20:37:30.133822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.477 [2024-11-18 20:37:30.133850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.477 qpair failed and we were unable to recover it.
00:36:18.477 [2024-11-18 20:37:30.133937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.477 [2024-11-18 20:37:30.133964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.477 qpair failed and we were unable to recover it.
00:36:18.477 [2024-11-18 20:37:30.134053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.477 [2024-11-18 20:37:30.134078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.477 qpair failed and we were unable to recover it.
00:36:18.477 [2024-11-18 20:37:30.134188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.477 [2024-11-18 20:37:30.134216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.477 qpair failed and we were unable to recover it.
00:36:18.477 [2024-11-18 20:37:30.134310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.477 [2024-11-18 20:37:30.134338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.477 qpair failed and we were unable to recover it.
00:36:18.477 [2024-11-18 20:37:30.134429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.477 [2024-11-18 20:37:30.134457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.477 qpair failed and we were unable to recover it.
00:36:18.477 [2024-11-18 20:37:30.134595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.477 [2024-11-18 20:37:30.134621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.477 qpair failed and we were unable to recover it.
00:36:18.477 [2024-11-18 20:37:30.134725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.477 [2024-11-18 20:37:30.134753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.477 qpair failed and we were unable to recover it.
00:36:18.477 [2024-11-18 20:37:30.134834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.477 [2024-11-18 20:37:30.134862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.477 qpair failed and we were unable to recover it.
00:36:18.477 [2024-11-18 20:37:30.134949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.477 [2024-11-18 20:37:30.134978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.477 qpair failed and we were unable to recover it.
00:36:18.477 [2024-11-18 20:37:30.135084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.477 [2024-11-18 20:37:30.135111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.477 qpair failed and we were unable to recover it.
00:36:18.477 [2024-11-18 20:37:30.135197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.477 [2024-11-18 20:37:30.135225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.477 qpair failed and we were unable to recover it.
00:36:18.477 [2024-11-18 20:37:30.135341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.477 [2024-11-18 20:37:30.135368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.477 qpair failed and we were unable to recover it.
00:36:18.477 [2024-11-18 20:37:30.135464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.477 [2024-11-18 20:37:30.135492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.477 qpair failed and we were unable to recover it.
00:36:18.477 [2024-11-18 20:37:30.135607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.477 [2024-11-18 20:37:30.135644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.477 qpair failed and we were unable to recover it.
00:36:18.477 [2024-11-18 20:37:30.135763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.477 [2024-11-18 20:37:30.135791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.477 qpair failed and we were unable to recover it.
00:36:18.477 [2024-11-18 20:37:30.135873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.477 [2024-11-18 20:37:30.135899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.477 qpair failed and we were unable to recover it.
00:36:18.477 [2024-11-18 20:37:30.135999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.477 [2024-11-18 20:37:30.136027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.477 qpair failed and we were unable to recover it.
00:36:18.477 [2024-11-18 20:37:30.136102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.477 [2024-11-18 20:37:30.136127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.477 qpair failed and we were unable to recover it.
00:36:18.477 [2024-11-18 20:37:30.136214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.477 [2024-11-18 20:37:30.136243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.477 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.136403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.136430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.136509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.136536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.136657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.136685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.136814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.136841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.136952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.136979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.137101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.137128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.137219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.137246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.137329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.137355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.137464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.137490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.137598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.137626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.137722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.137749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.137825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.137852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.137968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.137995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.138078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.138105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.138189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.138218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.138327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.138353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.138434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.138461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.138548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.138574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.138665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.138705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.138845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.138880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.138981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.139009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.139088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.139115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.139201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.139228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.139338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.139366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.139447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.139475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.139605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.139650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.139745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.139774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.139857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.139885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.139983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.140011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.140124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.140151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.140230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.140258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.478 qpair failed and we were unable to recover it.
00:36:18.478 [2024-11-18 20:37:30.140356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.478 [2024-11-18 20:37:30.140396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.140485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.140512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.140626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.140660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.140739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.140765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.140842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.140869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.140946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.140972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.141061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.141087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.141170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.141196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.141313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.141342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.141420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.141448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.141529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.141558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.141675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.141703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.141787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.141814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.141931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.141958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.142043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.142071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.142157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.142185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.142270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.142298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.142388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.142415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.142519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.142546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.142650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.142678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.142765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.142793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.142885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.142919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.142996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.143023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.143132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.143159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.143257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.143285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.143397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.143423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.143507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.143534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.143615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.143649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.143736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.143763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.143842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.143869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.143956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.143983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.144094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.144121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.144198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.144225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.479 [2024-11-18 20:37:30.144330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.479 [2024-11-18 20:37:30.144357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.479 qpair failed and we were unable to recover it.
00:36:18.480 [2024-11-18 20:37:30.144484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.480 [2024-11-18 20:37:30.144524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.480 qpair failed and we were unable to recover it.
00:36:18.480 [2024-11-18 20:37:30.144652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.480 [2024-11-18 20:37:30.144681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.480 qpair failed and we were unable to recover it.
00:36:18.480 [2024-11-18 20:37:30.144763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.480 [2024-11-18 20:37:30.144788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.480 qpair failed and we were unable to recover it.
00:36:18.480 [2024-11-18 20:37:30.144879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.480 [2024-11-18 20:37:30.144905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.480 qpair failed and we were unable to recover it.
00:36:18.480 [2024-11-18 20:37:30.144986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.480 [2024-11-18 20:37:30.145022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.480 qpair failed and we were unable to recover it.
00:36:18.480 [2024-11-18 20:37:30.145133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.480 [2024-11-18 20:37:30.145162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.480 qpair failed and we were unable to recover it.
00:36:18.480 [2024-11-18 20:37:30.145246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.480 [2024-11-18 20:37:30.145274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.480 qpair failed and we were unable to recover it.
00:36:18.480 [2024-11-18 20:37:30.145379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.480 [2024-11-18 20:37:30.145419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.480 qpair failed and we were unable to recover it.
00:36:18.480 [2024-11-18 20:37:30.145564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.480 [2024-11-18 20:37:30.145592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.480 qpair failed and we were unable to recover it.
00:36:18.480 [2024-11-18 20:37:30.145691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.480 [2024-11-18 20:37:30.145717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.480 qpair failed and we were unable to recover it.
00:36:18.480 [2024-11-18 20:37:30.145799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.480 [2024-11-18 20:37:30.145825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.480 qpair failed and we were unable to recover it.
00:36:18.480 [2024-11-18 20:37:30.145911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.480 [2024-11-18 20:37:30.145937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.480 qpair failed and we were unable to recover it.
00:36:18.480 [2024-11-18 20:37:30.146044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-11-18 20:37:30.146071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-11-18 20:37:30.146149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-11-18 20:37:30.146174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-11-18 20:37:30.146247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-11-18 20:37:30.146273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-11-18 20:37:30.146345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-11-18 20:37:30.146371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-11-18 20:37:30.146492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-11-18 20:37:30.146520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 
00:36:18.480 [2024-11-18 20:37:30.146611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-11-18 20:37:30.146647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-11-18 20:37:30.146747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-11-18 20:37:30.146776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-11-18 20:37:30.146860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-11-18 20:37:30.146887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-11-18 20:37:30.146963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-11-18 20:37:30.146990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-11-18 20:37:30.147068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-11-18 20:37:30.147100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 
00:36:18.480 [2024-11-18 20:37:30.147193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-11-18 20:37:30.147220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-11-18 20:37:30.147308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-11-18 20:37:30.147337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-11-18 20:37:30.147446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-11-18 20:37:30.147487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-11-18 20:37:30.147573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-11-18 20:37:30.147600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-11-18 20:37:30.147699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-11-18 20:37:30.147726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 
00:36:18.480 [2024-11-18 20:37:30.147802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-11-18 20:37:30.147828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-11-18 20:37:30.147910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-11-18 20:37:30.147937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-11-18 20:37:30.148011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-11-18 20:37:30.148038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-11-18 20:37:30.148114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-11-18 20:37:30.148140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-11-18 20:37:30.148229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-11-18 20:37:30.148256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 
00:36:18.480 [2024-11-18 20:37:30.148355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-11-18 20:37:30.148396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-11-18 20:37:30.148493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-11-18 20:37:30.148532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-11-18 20:37:30.148629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-11-18 20:37:30.148668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-11-18 20:37:30.148775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-11-18 20:37:30.148803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-11-18 20:37:30.148889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-11-18 20:37:30.148915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 
00:36:18.481 [2024-11-18 20:37:30.148991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-11-18 20:37:30.149017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-11-18 20:37:30.149139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-11-18 20:37:30.149167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-11-18 20:37:30.149251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-11-18 20:37:30.149277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-11-18 20:37:30.149374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-11-18 20:37:30.149415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-11-18 20:37:30.149500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-11-18 20:37:30.149529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 
00:36:18.481 [2024-11-18 20:37:30.149628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-11-18 20:37:30.149670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-11-18 20:37:30.149768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-11-18 20:37:30.149795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-11-18 20:37:30.149912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-11-18 20:37:30.149947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-11-18 20:37:30.150027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-11-18 20:37:30.150053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-11-18 20:37:30.150167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-11-18 20:37:30.150195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 
00:36:18.481 [2024-11-18 20:37:30.150281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-11-18 20:37:30.150308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-11-18 20:37:30.150384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-11-18 20:37:30.150417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-11-18 20:37:30.150543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-11-18 20:37:30.150569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-11-18 20:37:30.150690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-11-18 20:37:30.150721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-11-18 20:37:30.150803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-11-18 20:37:30.150840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 
00:36:18.481 [2024-11-18 20:37:30.150956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-11-18 20:37:30.150983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-11-18 20:37:30.151071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-11-18 20:37:30.151098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-11-18 20:37:30.151188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-11-18 20:37:30.151217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-11-18 20:37:30.151323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-11-18 20:37:30.151350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-11-18 20:37:30.151468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-11-18 20:37:30.151496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 
00:36:18.481 [2024-11-18 20:37:30.151617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-11-18 20:37:30.151653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-11-18 20:37:30.151732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-11-18 20:37:30.151759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-11-18 20:37:30.151873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-11-18 20:37:30.151900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-11-18 20:37:30.151985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-11-18 20:37:30.152012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-11-18 20:37:30.152093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-11-18 20:37:30.152120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 
00:36:18.481 [2024-11-18 20:37:30.152203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-11-18 20:37:30.152233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-11-18 20:37:30.152315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-11-18 20:37:30.152342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-11-18 20:37:30.152416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-11-18 20:37:30.152443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-11-18 20:37:30.152520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-11-18 20:37:30.152552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-11-18 20:37:30.152647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-11-18 20:37:30.152676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 
00:36:18.482 [2024-11-18 20:37:30.152770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-11-18 20:37:30.152797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-11-18 20:37:30.152881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-11-18 20:37:30.152909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-11-18 20:37:30.153025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-11-18 20:37:30.153053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-11-18 20:37:30.153133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-11-18 20:37:30.153160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-11-18 20:37:30.153275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-11-18 20:37:30.153303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 
00:36:18.482 [2024-11-18 20:37:30.153390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-11-18 20:37:30.153416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-11-18 20:37:30.153500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-11-18 20:37:30.153527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-11-18 20:37:30.153603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-11-18 20:37:30.153629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-11-18 20:37:30.153734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-11-18 20:37:30.153761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-11-18 20:37:30.153875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-11-18 20:37:30.153902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 
00:36:18.482 [2024-11-18 20:37:30.153986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-11-18 20:37:30.154013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-11-18 20:37:30.154090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-11-18 20:37:30.154117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-11-18 20:37:30.154227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-11-18 20:37:30.154258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-11-18 20:37:30.154365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-11-18 20:37:30.154393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-11-18 20:37:30.154535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-11-18 20:37:30.154576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 
00:36:18.482 [2024-11-18 20:37:30.154666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-11-18 20:37:30.154695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-11-18 20:37:30.154777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-11-18 20:37:30.154805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-11-18 20:37:30.154889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-11-18 20:37:30.154916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-11-18 20:37:30.155003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-11-18 20:37:30.155030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-11-18 20:37:30.155113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-11-18 20:37:30.155142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 
00:36:18.482 [2024-11-18 20:37:30.155220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-11-18 20:37:30.155246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-11-18 20:37:30.155366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-11-18 20:37:30.155397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-11-18 20:37:30.155511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-11-18 20:37:30.155538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-11-18 20:37:30.155633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-11-18 20:37:30.155666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-11-18 20:37:30.155751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-11-18 20:37:30.155777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 
00:36:18.483 [... the same three-message sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats 110 more times between 2024-11-18 20:37:30.155872 and 20:37:30.169524, cycling over tqpair values 0x7fe694000b90, 0x7fe698000b90, 0x7fe6a0000b90, and 0x1671b40 ...]
00:36:18.486 [2024-11-18 20:37:30.169603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-11-18 20:37:30.169630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-11-18 20:37:30.169733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-11-18 20:37:30.169760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-11-18 20:37:30.169837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-11-18 20:37:30.169864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-11-18 20:37:30.169952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-11-18 20:37:30.169980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-11-18 20:37:30.170076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-11-18 20:37:30.170116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 
00:36:18.486 [2024-11-18 20:37:30.170228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-11-18 20:37:30.170263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-11-18 20:37:30.170360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-11-18 20:37:30.170388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-11-18 20:37:30.170473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-11-18 20:37:30.170500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-11-18 20:37:30.170611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-11-18 20:37:30.170653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-11-18 20:37:30.170742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-11-18 20:37:30.170770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 
00:36:18.486 [2024-11-18 20:37:30.170855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-11-18 20:37:30.170882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-11-18 20:37:30.170963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-11-18 20:37:30.170991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-11-18 20:37:30.171075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-11-18 20:37:30.171104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-11-18 20:37:30.171251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-11-18 20:37:30.171280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.487 [2024-11-18 20:37:30.171372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-11-18 20:37:30.171410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 
00:36:18.487 [2024-11-18 20:37:30.171492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-11-18 20:37:30.171520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-11-18 20:37:30.171614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-11-18 20:37:30.171647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-11-18 20:37:30.171762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-11-18 20:37:30.171791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-11-18 20:37:30.171906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-11-18 20:37:30.171944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-11-18 20:37:30.172031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-11-18 20:37:30.172057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 
00:36:18.487 [2024-11-18 20:37:30.172143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-11-18 20:37:30.172170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-11-18 20:37:30.172279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-11-18 20:37:30.172316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-11-18 20:37:30.172434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-11-18 20:37:30.172460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-11-18 20:37:30.172551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-11-18 20:37:30.172579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-11-18 20:37:30.172669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-11-18 20:37:30.172698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 
00:36:18.487 [2024-11-18 20:37:30.172781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-11-18 20:37:30.172808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-11-18 20:37:30.172912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-11-18 20:37:30.172943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-11-18 20:37:30.173057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-11-18 20:37:30.173086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-11-18 20:37:30.173169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-11-18 20:37:30.173197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-11-18 20:37:30.173274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-11-18 20:37:30.173302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 
00:36:18.487 [2024-11-18 20:37:30.173382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-11-18 20:37:30.173423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-11-18 20:37:30.173514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-11-18 20:37:30.173544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-11-18 20:37:30.173673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-11-18 20:37:30.173701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-11-18 20:37:30.173781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-11-18 20:37:30.173813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-11-18 20:37:30.173892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-11-18 20:37:30.173919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 
00:36:18.487 [2024-11-18 20:37:30.174037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-11-18 20:37:30.174063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-11-18 20:37:30.174156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-11-18 20:37:30.174184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-11-18 20:37:30.174294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-11-18 20:37:30.174322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-11-18 20:37:30.174406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-11-18 20:37:30.174433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-11-18 20:37:30.174530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-11-18 20:37:30.174565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 
00:36:18.487 [2024-11-18 20:37:30.174690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-11-18 20:37:30.174717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-11-18 20:37:30.174803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-11-18 20:37:30.174830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-11-18 20:37:30.174939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-11-18 20:37:30.174967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-11-18 20:37:30.175044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-11-18 20:37:30.175072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-11-18 20:37:30.175175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-11-18 20:37:30.175215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 
00:36:18.488 [2024-11-18 20:37:30.175309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-11-18 20:37:30.175337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-11-18 20:37:30.175431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-11-18 20:37:30.175471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-11-18 20:37:30.175566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-11-18 20:37:30.175596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-11-18 20:37:30.175706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-11-18 20:37:30.175751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-11-18 20:37:30.175863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-11-18 20:37:30.175890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 
00:36:18.488 [2024-11-18 20:37:30.175971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-11-18 20:37:30.175998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-11-18 20:37:30.176077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-11-18 20:37:30.176104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-11-18 20:37:30.176196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-11-18 20:37:30.176223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-11-18 20:37:30.176337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-11-18 20:37:30.176364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-11-18 20:37:30.176461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-11-18 20:37:30.176488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 
00:36:18.488 [2024-11-18 20:37:30.176596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-11-18 20:37:30.176623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-11-18 20:37:30.176715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-11-18 20:37:30.176742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-11-18 20:37:30.176833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-11-18 20:37:30.176861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-11-18 20:37:30.176957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-11-18 20:37:30.176984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-11-18 20:37:30.177057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-11-18 20:37:30.177084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 
00:36:18.488 [2024-11-18 20:37:30.177179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-11-18 20:37:30.177209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-11-18 20:37:30.177298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-11-18 20:37:30.177326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-11-18 20:37:30.177414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-11-18 20:37:30.177443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-11-18 20:37:30.177581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-11-18 20:37:30.177609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-11-18 20:37:30.177712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-11-18 20:37:30.177740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 
00:36:18.488 [2024-11-18 20:37:30.177868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-11-18 20:37:30.177895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-11-18 20:37:30.177990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-11-18 20:37:30.178017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-11-18 20:37:30.178105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-11-18 20:37:30.178133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-11-18 20:37:30.178246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-11-18 20:37:30.178274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-11-18 20:37:30.178353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-11-18 20:37:30.178379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 
00:36:18.488 [2024-11-18 20:37:30.178477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-11-18 20:37:30.178504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-11-18 20:37:30.178581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-11-18 20:37:30.178619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.489 [2024-11-18 20:37:30.178741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-11-18 20:37:30.178768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-11-18 20:37:30.178853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-11-18 20:37:30.178885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-11-18 20:37:30.179004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-11-18 20:37:30.179032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 
00:36:18.489 [2024-11-18 20:37:30.179119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-11-18 20:37:30.179156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-11-18 20:37:30.179235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-11-18 20:37:30.179263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-11-18 20:37:30.179348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-11-18 20:37:30.179377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-11-18 20:37:30.179463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-11-18 20:37:30.179499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-11-18 20:37:30.179600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-11-18 20:37:30.179661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 
00:36:18.492 [2024-11-18 20:37:30.193559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-11-18 20:37:30.193588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-11-18 20:37:30.193682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-11-18 20:37:30.193711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-11-18 20:37:30.193791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-11-18 20:37:30.193819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-11-18 20:37:30.193907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-11-18 20:37:30.193940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-11-18 20:37:30.194049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-11-18 20:37:30.194077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 
00:36:18.492 [2024-11-18 20:37:30.194153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-11-18 20:37:30.194183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-11-18 20:37:30.194260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-11-18 20:37:30.194287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-11-18 20:37:30.194377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-11-18 20:37:30.194417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-11-18 20:37:30.194528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-11-18 20:37:30.194556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-11-18 20:37:30.194648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-11-18 20:37:30.194676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 
00:36:18.492 [2024-11-18 20:37:30.194752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-11-18 20:37:30.194779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-11-18 20:37:30.194857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-11-18 20:37:30.194884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-11-18 20:37:30.194970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-11-18 20:37:30.194997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-11-18 20:37:30.195090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-11-18 20:37:30.195118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-11-18 20:37:30.195202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-11-18 20:37:30.195230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 
00:36:18.493 [2024-11-18 20:37:30.195372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-11-18 20:37:30.195399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-11-18 20:37:30.195484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-11-18 20:37:30.195511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-11-18 20:37:30.195600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-11-18 20:37:30.195627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-11-18 20:37:30.195765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-11-18 20:37:30.195793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-11-18 20:37:30.195868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-11-18 20:37:30.195895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 
00:36:18.493 [2024-11-18 20:37:30.195973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-11-18 20:37:30.196000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-11-18 20:37:30.196111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-11-18 20:37:30.196138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-11-18 20:37:30.196215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-11-18 20:37:30.196243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-11-18 20:37:30.196334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-11-18 20:37:30.196361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-11-18 20:37:30.196445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-11-18 20:37:30.196471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 
00:36:18.493 [2024-11-18 20:37:30.196584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-11-18 20:37:30.196613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-11-18 20:37:30.196727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-11-18 20:37:30.196757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-11-18 20:37:30.196848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-11-18 20:37:30.196875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-11-18 20:37:30.196961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-11-18 20:37:30.196989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-11-18 20:37:30.197069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-11-18 20:37:30.197096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 
00:36:18.493 [2024-11-18 20:37:30.197209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-11-18 20:37:30.197241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-11-18 20:37:30.197334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-11-18 20:37:30.197362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-11-18 20:37:30.197455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-11-18 20:37:30.197482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-11-18 20:37:30.197569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-11-18 20:37:30.197597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-11-18 20:37:30.197687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-11-18 20:37:30.197715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 
00:36:18.493 [2024-11-18 20:37:30.197818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-11-18 20:37:30.197845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-11-18 20:37:30.197967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-11-18 20:37:30.197994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-11-18 20:37:30.198080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-11-18 20:37:30.198118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-11-18 20:37:30.198196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-11-18 20:37:30.198225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-11-18 20:37:30.198316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-11-18 20:37:30.198356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 
00:36:18.493 [2024-11-18 20:37:30.198439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-11-18 20:37:30.198466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.494 [2024-11-18 20:37:30.198559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.198597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-11-18 20:37:30.198687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.198717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-11-18 20:37:30.198812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.198839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-11-18 20:37:30.198964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.198995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 
00:36:18.494 [2024-11-18 20:37:30.199116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.199149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-11-18 20:37:30.199238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.199264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-11-18 20:37:30.199377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.199405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-11-18 20:37:30.199492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.199519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-11-18 20:37:30.199601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.199628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 
00:36:18.494 [2024-11-18 20:37:30.199727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.199755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-11-18 20:37:30.199868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.199896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-11-18 20:37:30.199984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.200018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-11-18 20:37:30.200099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.200126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-11-18 20:37:30.200205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.200239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 
00:36:18.494 [2024-11-18 20:37:30.200329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.200369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-11-18 20:37:30.200490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.200518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-11-18 20:37:30.200642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.200670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-11-18 20:37:30.200747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.200774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-11-18 20:37:30.200858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.200884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 
00:36:18.494 [2024-11-18 20:37:30.201085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.201124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-11-18 20:37:30.201202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.201228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-11-18 20:37:30.201308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.201341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-11-18 20:37:30.201418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.201445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-11-18 20:37:30.201530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.201558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 
00:36:18.494 [2024-11-18 20:37:30.201644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.201672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-11-18 20:37:30.201789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.201817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-11-18 20:37:30.201891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.201918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-11-18 20:37:30.202001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.202028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-11-18 20:37:30.202119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.202146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 
00:36:18.494 [2024-11-18 20:37:30.202236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.202269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-11-18 20:37:30.202377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.202405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-11-18 20:37:30.202481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.202508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-11-18 20:37:30.202592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-11-18 20:37:30.202619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-11-18 20:37:30.202721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-11-18 20:37:30.202749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 
00:36:18.495 [2024-11-18 20:37:30.202832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.495 [2024-11-18 20:37:30.202859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.495 qpair failed and we were unable to recover it.
00:36:18.495 [2024-11-18 20:37:30.203599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.495 [2024-11-18 20:37:30.203650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.495 qpair failed and we were unable to recover it.
00:36:18.495 [2024-11-18 20:37:30.204131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.495 [2024-11-18 20:37:30.204171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.495 qpair failed and we were unable to recover it.
00:36:18.495 [2024-11-18 20:37:30.205341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.495 [2024-11-18 20:37:30.205386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.495 qpair failed and we were unable to recover it.
00:36:18.497 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:18.497 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:36:18.497 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:36:18.497 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:18.497 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:18.498 [2024-11-18 20:37:30.217363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-11-18 20:37:30.217390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-11-18 20:37:30.217483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-11-18 20:37:30.217517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-11-18 20:37:30.217598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-11-18 20:37:30.217633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-11-18 20:37:30.217730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-11-18 20:37:30.217756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-11-18 20:37:30.217837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-11-18 20:37:30.217864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 
00:36:18.498 [2024-11-18 20:37:30.217941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-11-18 20:37:30.217967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-11-18 20:37:30.218048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-11-18 20:37:30.218074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-11-18 20:37:30.218153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-11-18 20:37:30.218179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-11-18 20:37:30.218261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-11-18 20:37:30.218286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-11-18 20:37:30.218395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-11-18 20:37:30.218422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 
00:36:18.498 [2024-11-18 20:37:30.218552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-11-18 20:37:30.218581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-11-18 20:37:30.218678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-11-18 20:37:30.218705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-11-18 20:37:30.218817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-11-18 20:37:30.218845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-11-18 20:37:30.218926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-11-18 20:37:30.218959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-11-18 20:37:30.219052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-11-18 20:37:30.219080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 
00:36:18.499 [2024-11-18 20:37:30.219198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-11-18 20:37:30.219226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-11-18 20:37:30.219350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-11-18 20:37:30.219377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-11-18 20:37:30.219462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-11-18 20:37:30.219489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-11-18 20:37:30.219604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-11-18 20:37:30.219631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-11-18 20:37:30.219730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-11-18 20:37:30.219757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 
00:36:18.499 [2024-11-18 20:37:30.219840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-11-18 20:37:30.219866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-11-18 20:37:30.219947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-11-18 20:37:30.219972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-11-18 20:37:30.220081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-11-18 20:37:30.220108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-11-18 20:37:30.220214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-11-18 20:37:30.220240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-11-18 20:37:30.220318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-11-18 20:37:30.220343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 
00:36:18.499 [2024-11-18 20:37:30.220420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-11-18 20:37:30.220446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-11-18 20:37:30.220523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-11-18 20:37:30.220549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-11-18 20:37:30.220644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-11-18 20:37:30.220681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-11-18 20:37:30.220770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-11-18 20:37:30.220796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-11-18 20:37:30.220877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-11-18 20:37:30.220902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 
00:36:18.499 [2024-11-18 20:37:30.220983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-11-18 20:37:30.221010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-11-18 20:37:30.221118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-11-18 20:37:30.221145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-11-18 20:37:30.221230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-11-18 20:37:30.221257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-11-18 20:37:30.221331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-11-18 20:37:30.221358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-11-18 20:37:30.221470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-11-18 20:37:30.221496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 
00:36:18.499 [2024-11-18 20:37:30.221607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-11-18 20:37:30.221634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-11-18 20:37:30.221721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-11-18 20:37:30.221746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-11-18 20:37:30.221824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-11-18 20:37:30.221850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.500 [2024-11-18 20:37:30.221937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.221962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-11-18 20:37:30.222053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.222080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 
00:36:18.500 [2024-11-18 20:37:30.222189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.222227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-11-18 20:37:30.222320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.222349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-11-18 20:37:30.222465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.222493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-11-18 20:37:30.222573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.222600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-11-18 20:37:30.222687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.222713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 
00:36:18.500 [2024-11-18 20:37:30.222793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.222820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-11-18 20:37:30.222908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.222934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-11-18 20:37:30.223068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.223094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-11-18 20:37:30.223176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.223202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-11-18 20:37:30.223318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.223345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 
00:36:18.500 [2024-11-18 20:37:30.223440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.223467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-11-18 20:37:30.223579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.223606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-11-18 20:37:30.223690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.223716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-11-18 20:37:30.223806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.223831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-11-18 20:37:30.223913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.223942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 
00:36:18.500 [2024-11-18 20:37:30.224022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.224047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-11-18 20:37:30.224157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.224187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-11-18 20:37:30.224306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.224333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-11-18 20:37:30.224424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.224450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-11-18 20:37:30.224538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.224565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 
00:36:18.500 [2024-11-18 20:37:30.224647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.224676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-11-18 20:37:30.224767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.224807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-11-18 20:37:30.224898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.224934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-11-18 20:37:30.225052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.225091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-11-18 20:37:30.225174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.225201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 
00:36:18.500 [2024-11-18 20:37:30.225326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.225353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-11-18 20:37:30.225432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.225458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-11-18 20:37:30.225537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.225569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-11-18 20:37:30.225687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.225715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-11-18 20:37:30.225797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.225822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 
00:36:18.500 [2024-11-18 20:37:30.225897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.225924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-11-18 20:37:30.226010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-11-18 20:37:30.226036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-11-18 20:37:30.226119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.226145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-11-18 20:37:30.226233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.226262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-11-18 20:37:30.226376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.226402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 
00:36:18.501 [2024-11-18 20:37:30.226483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.226510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-11-18 20:37:30.226585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.226610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-11-18 20:37:30.226709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.226734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-11-18 20:37:30.226844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.226870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-11-18 20:37:30.226954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.226979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 
00:36:18.501 [2024-11-18 20:37:30.227056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.227081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-11-18 20:37:30.227166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.227191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-11-18 20:37:30.227271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.227296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-11-18 20:37:30.227428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.227469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-11-18 20:37:30.227569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.227609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 
00:36:18.501 [2024-11-18 20:37:30.227717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.227746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-11-18 20:37:30.227830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.227859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-11-18 20:37:30.227939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.227966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-11-18 20:37:30.228075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.228103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-11-18 20:37:30.228220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.228248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 
00:36:18.501 [2024-11-18 20:37:30.228360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.228387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-11-18 20:37:30.228535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.228562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-11-18 20:37:30.228652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.228678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-11-18 20:37:30.228787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.228813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-11-18 20:37:30.228893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.228919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 
00:36:18.501 [2024-11-18 20:37:30.229030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.229068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-11-18 20:37:30.229157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.229182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-11-18 20:37:30.229274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.229301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-11-18 20:37:30.229380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.229406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-11-18 20:37:30.229504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.229541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 
00:36:18.501 [2024-11-18 20:37:30.229662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.229691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-11-18 20:37:30.229771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.229797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-11-18 20:37:30.229889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.229914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-11-18 20:37:30.229995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.230020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-11-18 20:37:30.230107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.230143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 
00:36:18.501 [2024-11-18 20:37:30.230227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.230253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-11-18 20:37:30.230330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-11-18 20:37:30.230356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-11-18 20:37:30.230430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-11-18 20:37:30.230461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-11-18 20:37:30.230545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-11-18 20:37:30.230573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-11-18 20:37:30.230667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-11-18 20:37:30.230712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 
00:36:18.502 [2024-11-18 20:37:30.230811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-11-18 20:37:30.230840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-11-18 20:37:30.230917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-11-18 20:37:30.230949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-11-18 20:37:30.231032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-11-18 20:37:30.231068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-11-18 20:37:30.231147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-11-18 20:37:30.231173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-11-18 20:37:30.231257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-11-18 20:37:30.231290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 
00:36:18.502 [2024-11-18 20:37:30.231365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-11-18 20:37:30.231400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-11-18 20:37:30.231507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-11-18 20:37:30.231533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-11-18 20:37:30.231616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-11-18 20:37:30.231658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-11-18 20:37:30.231743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-11-18 20:37:30.231769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-11-18 20:37:30.231847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-11-18 20:37:30.231872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 
00:36:18.502 [2024-11-18 20:37:30.231958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-11-18 20:37:30.231984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-11-18 20:37:30.232069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-11-18 20:37:30.232105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-11-18 20:37:30.232217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-11-18 20:37:30.232249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-11-18 20:37:30.232358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-11-18 20:37:30.232385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-11-18 20:37:30.232495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-11-18 20:37:30.232524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 
00:36:18.502 [2024-11-18 20:37:30.232608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.502 [2024-11-18 20:37:30.232649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.502 qpair failed and we were unable to recover it.
00:36:18.502 [2024-11-18 20:37:30.232728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.502 [2024-11-18 20:37:30.232753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.502 qpair failed and we were unable to recover it.
00:36:18.502 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:18.502 [2024-11-18 20:37:30.232844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.502 [2024-11-18 20:37:30.232870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.502 qpair failed and we were unable to recover it.
00:36:18.502 [2024-11-18 20:37:30.233009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.502 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:36:18.502 [2024-11-18 20:37:30.233036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.502 qpair failed and we were unable to recover it.
00:36:18.502 [2024-11-18 20:37:30.233125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.502 [2024-11-18 20:37:30.233153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.502 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:18.502 qpair failed and we were unable to recover it.
00:36:18.502 [2024-11-18 20:37:30.233252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.502 [2024-11-18 20:37:30.233278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.502 qpair failed and we were unable to recover it.
00:36:18.502 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:18.502 [2024-11-18 20:37:30.233367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.502 [2024-11-18 20:37:30.233407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.502 qpair failed and we were unable to recover it.
00:36:18.502 [2024-11-18 20:37:30.233534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.502 [2024-11-18 20:37:30.233584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.502 qpair failed and we were unable to recover it.
00:36:18.502 [2024-11-18 20:37:30.233678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-11-18 20:37:30.233705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-11-18 20:37:30.233785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-11-18 20:37:30.233810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-11-18 20:37:30.233916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-11-18 20:37:30.233951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-11-18 20:37:30.234068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-11-18 20:37:30.234104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-11-18 20:37:30.234216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-11-18 20:37:30.234244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 
00:36:18.502 [2024-11-18 20:37:30.234358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-11-18 20:37:30.234388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-11-18 20:37:30.234482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-11-18 20:37:30.234510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-11-18 20:37:30.234647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-11-18 20:37:30.234675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.503 [2024-11-18 20:37:30.234782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.234809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-11-18 20:37:30.234889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.234917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 
00:36:18.503 [2024-11-18 20:37:30.235011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.235036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-11-18 20:37:30.235113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.235138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-11-18 20:37:30.235220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.235246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-11-18 20:37:30.235371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.235399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-11-18 20:37:30.235509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.235536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 
00:36:18.503 [2024-11-18 20:37:30.235614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.235649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-11-18 20:37:30.235740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.235766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-11-18 20:37:30.235872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.235899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-11-18 20:37:30.235979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.236004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-11-18 20:37:30.236092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.236119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 
00:36:18.503 [2024-11-18 20:37:30.236200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.236227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-11-18 20:37:30.236339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.236366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-11-18 20:37:30.236461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.236493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-11-18 20:37:30.236574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.236600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-11-18 20:37:30.236793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.236822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 
00:36:18.503 [2024-11-18 20:37:30.236900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.236925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-11-18 20:37:30.237029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.237055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-11-18 20:37:30.237171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.237197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-11-18 20:37:30.237279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.237306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-11-18 20:37:30.237396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.237422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 
00:36:18.503 [2024-11-18 20:37:30.237514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.237553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-11-18 20:37:30.237663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.237692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-11-18 20:37:30.237778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.237806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-11-18 20:37:30.237889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.237916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-11-18 20:37:30.238042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.238070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 
00:36:18.503 [2024-11-18 20:37:30.238152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.238177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-11-18 20:37:30.238253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.238280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-11-18 20:37:30.238360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.238385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-11-18 20:37:30.238496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.238524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-11-18 20:37:30.238676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-11-18 20:37:30.238709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 
00:36:18.503 [2024-11-18 20:37:30.238788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-11-18 20:37:30.238814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-11-18 20:37:30.238929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-11-18 20:37:30.238966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-11-18 20:37:30.239040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-11-18 20:37:30.239067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-11-18 20:37:30.239192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-11-18 20:37:30.239231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-11-18 20:37:30.239338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-11-18 20:37:30.239377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 
00:36:18.504 [2024-11-18 20:37:30.239458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.504 [2024-11-18 20:37:30.239484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.504 qpair failed and we were unable to recover it.
00:36:18.504 [2024-11-18 20:37:30.239573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.504 [2024-11-18 20:37:30.239601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.504 qpair failed and we were unable to recover it.
00:36:18.504 [2024-11-18 20:37:30.239685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.504 [2024-11-18 20:37:30.239712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.504 qpair failed and we were unable to recover it.
00:36:18.504 [2024-11-18 20:37:30.239788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.504 [2024-11-18 20:37:30.239813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.504 qpair failed and we were unable to recover it.
00:36:18.504 [2024-11-18 20:37:30.239904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.504 [2024-11-18 20:37:30.239930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.504 qpair failed and we were unable to recover it.
00:36:18.504 [2024-11-18 20:37:30.240015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.504 [2024-11-18 20:37:30.240041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.504 qpair failed and we were unable to recover it.
00:36:18.504 [2024-11-18 20:37:30.240246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.504 [2024-11-18 20:37:30.240290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.504 qpair failed and we were unable to recover it.
00:36:18.504 [2024-11-18 20:37:30.240391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.504 [2024-11-18 20:37:30.240420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.504 qpair failed and we were unable to recover it.
00:36:18.504 [2024-11-18 20:37:30.240547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.504 [2024-11-18 20:37:30.240588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.504 qpair failed and we were unable to recover it.
00:36:18.504 [2024-11-18 20:37:30.240708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.504 [2024-11-18 20:37:30.240739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.504 qpair failed and we were unable to recover it.
00:36:18.504 [2024-11-18 20:37:30.240822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.504 [2024-11-18 20:37:30.240849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.504 qpair failed and we were unable to recover it.
00:36:18.504 [2024-11-18 20:37:30.240955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.504 [2024-11-18 20:37:30.240982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.504 qpair failed and we were unable to recover it.
00:36:18.504 [2024-11-18 20:37:30.241076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.504 [2024-11-18 20:37:30.241103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.504 qpair failed and we were unable to recover it.
00:36:18.504 [2024-11-18 20:37:30.241196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.504 [2024-11-18 20:37:30.241224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.504 qpair failed and we were unable to recover it.
00:36:18.504 [2024-11-18 20:37:30.241312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.504 [2024-11-18 20:37:30.241341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.504 qpair failed and we were unable to recover it.
00:36:18.504 [2024-11-18 20:37:30.241450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.504 [2024-11-18 20:37:30.241477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.504 qpair failed and we were unable to recover it.
00:36:18.504 [2024-11-18 20:37:30.241565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.504 [2024-11-18 20:37:30.241594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.504 qpair failed and we were unable to recover it.
00:36:18.504 [2024-11-18 20:37:30.241700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.504 [2024-11-18 20:37:30.241727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.504 qpair failed and we were unable to recover it.
00:36:18.504 [2024-11-18 20:37:30.241809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.504 [2024-11-18 20:37:30.241834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.504 qpair failed and we were unable to recover it.
00:36:18.504 [2024-11-18 20:37:30.241920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.504 [2024-11-18 20:37:30.241950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.504 qpair failed and we were unable to recover it.
00:36:18.504 [2024-11-18 20:37:30.242033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.504 [2024-11-18 20:37:30.242059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.504 qpair failed and we were unable to recover it.
00:36:18.504 [2024-11-18 20:37:30.242173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.504 [2024-11-18 20:37:30.242200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.504 qpair failed and we were unable to recover it.
00:36:18.504 [2024-11-18 20:37:30.242278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.504 [2024-11-18 20:37:30.242305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.504 qpair failed and we were unable to recover it.
00:36:18.504 [2024-11-18 20:37:30.242422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.504 [2024-11-18 20:37:30.242449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.504 qpair failed and we were unable to recover it.
00:36:18.504 [2024-11-18 20:37:30.242558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.504 [2024-11-18 20:37:30.242588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.504 qpair failed and we were unable to recover it.
00:36:18.504 [2024-11-18 20:37:30.242722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.504 [2024-11-18 20:37:30.242760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.504 qpair failed and we were unable to recover it.
00:36:18.504 [2024-11-18 20:37:30.242847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.242873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.242962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.242987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.243075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.243101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.243177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.243203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.243344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.243372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.243454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.243482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.243566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.243592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.243685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.243713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.243800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.243833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.243911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.243950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.244042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.244067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.244176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.244204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.244329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.244356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.244441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.244468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.244582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.244610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.244744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.244772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.244878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.244905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.244985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.245017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.245101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.245128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.245221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.245248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.245366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.245394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.245509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.245536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.245634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.245674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.245762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.245794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.245869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.245895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.245983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.246023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.246102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.246128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.246211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.246237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.246354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.246381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.246477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.246503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.246587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.246614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.246710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.246737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.246822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.246850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.246932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.246969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.247077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.247104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.505 [2024-11-18 20:37:30.247194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-11-18 20:37:30.247220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.247310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.247336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.247449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.247476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.247586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.247613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.247712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.247748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.247843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.247872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.247993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.248018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.248127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.248167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.248270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.248297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.248409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.248436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.248576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.248604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.248720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.248759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.248853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.248883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.248973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.249015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.249107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.249133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.249270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.249297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.249410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.249437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.249521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.249551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.249625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.249662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.249777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.249804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.249888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.249913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.250015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.250040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.250138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.250163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.250271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.250298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.250409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.250446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.250537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.250562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.250668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.250696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.250805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.250832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.250907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.250933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.251037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.251064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.251175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.251201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.251281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.251307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.251387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.251413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.251552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-11-18 20:37:30.251579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-11-18 20:37:30.251684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.506 [2024-11-18 20:37:30.251723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.506 qpair failed and we were unable to recover it. 00:36:18.506 [2024-11-18 20:37:30.251809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.506 [2024-11-18 20:37:30.251835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.506 qpair failed and we were unable to recover it. 00:36:18.506 [2024-11-18 20:37:30.251938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.506 [2024-11-18 20:37:30.251978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.506 qpair failed and we were unable to recover it. 00:36:18.506 [2024-11-18 20:37:30.252078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.506 [2024-11-18 20:37:30.252107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.506 qpair failed and we were unable to recover it. 00:36:18.506 [2024-11-18 20:37:30.252193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.506 [2024-11-18 20:37:30.252223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.506 qpair failed and we were unable to recover it. 
00:36:18.506 [2024-11-18 20:37:30.252350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.506 [2024-11-18 20:37:30.252376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.506 qpair failed and we were unable to recover it. 00:36:18.506 [2024-11-18 20:37:30.252468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.506 [2024-11-18 20:37:30.252504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.506 qpair failed and we were unable to recover it. 00:36:18.506 [2024-11-18 20:37:30.252585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.506 [2024-11-18 20:37:30.252611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.506 qpair failed and we were unable to recover it. 00:36:18.506 [2024-11-18 20:37:30.252704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.506 [2024-11-18 20:37:30.252730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.506 qpair failed and we were unable to recover it. 00:36:18.506 [2024-11-18 20:37:30.252839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.252864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 
00:36:18.507 [2024-11-18 20:37:30.252945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.252972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 00:36:18.507 [2024-11-18 20:37:30.253088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.253116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 00:36:18.507 [2024-11-18 20:37:30.253201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.253229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 00:36:18.507 [2024-11-18 20:37:30.253331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.253381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 00:36:18.507 [2024-11-18 20:37:30.253470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.253497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 
00:36:18.507 [2024-11-18 20:37:30.253581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.253608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 00:36:18.507 [2024-11-18 20:37:30.253729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.253756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 00:36:18.507 [2024-11-18 20:37:30.253837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.253863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 00:36:18.507 [2024-11-18 20:37:30.253953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.253979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 00:36:18.507 [2024-11-18 20:37:30.254094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.254126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 
00:36:18.507 [2024-11-18 20:37:30.254205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.254230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 00:36:18.507 [2024-11-18 20:37:30.254340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.254367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 00:36:18.507 [2024-11-18 20:37:30.254456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.254484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 00:36:18.507 [2024-11-18 20:37:30.254560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.254588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 00:36:18.507 [2024-11-18 20:37:30.254730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.254759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 
00:36:18.507 [2024-11-18 20:37:30.254841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.254868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 00:36:18.507 [2024-11-18 20:37:30.254973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.255000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 00:36:18.507 [2024-11-18 20:37:30.255112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.255139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 00:36:18.507 [2024-11-18 20:37:30.255245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.255273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 00:36:18.507 [2024-11-18 20:37:30.255362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.255387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 
00:36:18.507 [2024-11-18 20:37:30.255469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.255495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 00:36:18.507 [2024-11-18 20:37:30.255580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.255608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 00:36:18.507 [2024-11-18 20:37:30.255749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.255776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 00:36:18.507 [2024-11-18 20:37:30.255875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.255916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 00:36:18.507 [2024-11-18 20:37:30.256051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.256080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 
00:36:18.507 [2024-11-18 20:37:30.256176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.256203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 00:36:18.507 [2024-11-18 20:37:30.256285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.256312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 00:36:18.507 [2024-11-18 20:37:30.256422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.256450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 00:36:18.507 [2024-11-18 20:37:30.256546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.256586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 00:36:18.507 [2024-11-18 20:37:30.256682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.256711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 
00:36:18.507 [2024-11-18 20:37:30.256827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.256857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 00:36:18.507 [2024-11-18 20:37:30.256954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.256980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 00:36:18.507 [2024-11-18 20:37:30.257093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.257131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 00:36:18.507 [2024-11-18 20:37:30.257243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.507 [2024-11-18 20:37:30.257270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.507 qpair failed and we were unable to recover it. 00:36:18.507 [2024-11-18 20:37:30.257353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.257381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 
00:36:18.508 [2024-11-18 20:37:30.257497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.257526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 00:36:18.508 [2024-11-18 20:37:30.257623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.257668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 00:36:18.508 [2024-11-18 20:37:30.257752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.257780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 00:36:18.508 [2024-11-18 20:37:30.257905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.257944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 00:36:18.508 [2024-11-18 20:37:30.258059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.258087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 
00:36:18.508 [2024-11-18 20:37:30.258174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.258202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 00:36:18.508 [2024-11-18 20:37:30.258310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.258337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 00:36:18.508 [2024-11-18 20:37:30.258430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.258471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 00:36:18.508 [2024-11-18 20:37:30.258598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.258644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 00:36:18.508 [2024-11-18 20:37:30.258732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.258758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 
00:36:18.508 [2024-11-18 20:37:30.258836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.258861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 00:36:18.508 [2024-11-18 20:37:30.258947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.258973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 00:36:18.508 [2024-11-18 20:37:30.259069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.259094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 00:36:18.508 [2024-11-18 20:37:30.259174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.259200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 00:36:18.508 [2024-11-18 20:37:30.259312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.259340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 
00:36:18.508 [2024-11-18 20:37:30.259418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.259445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 00:36:18.508 [2024-11-18 20:37:30.259551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.259578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 00:36:18.508 [2024-11-18 20:37:30.259671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.259699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 00:36:18.508 [2024-11-18 20:37:30.259815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.259844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 00:36:18.508 [2024-11-18 20:37:30.259972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.260019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 
00:36:18.508 [2024-11-18 20:37:30.260144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.260171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 00:36:18.508 [2024-11-18 20:37:30.260255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.260280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 00:36:18.508 [2024-11-18 20:37:30.260403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.260430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 00:36:18.508 [2024-11-18 20:37:30.260556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.260596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 00:36:18.508 [2024-11-18 20:37:30.260689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.260716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 
00:36:18.508 [2024-11-18 20:37:30.260802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.260827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 00:36:18.508 [2024-11-18 20:37:30.260946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.260973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 00:36:18.508 [2024-11-18 20:37:30.261100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.261127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 00:36:18.508 [2024-11-18 20:37:30.261219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.261244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 00:36:18.508 [2024-11-18 20:37:30.261351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.261384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 
00:36:18.508 [2024-11-18 20:37:30.261499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.261529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 00:36:18.508 [2024-11-18 20:37:30.261620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.261661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 00:36:18.508 [2024-11-18 20:37:30.261740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.508 [2024-11-18 20:37:30.261764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.508 qpair failed and we were unable to recover it. 00:36:18.508 [2024-11-18 20:37:30.261841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.261865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.261974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.262004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 
00:36:18.509 [2024-11-18 20:37:30.262093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.262121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.262250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.262290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.262417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.262456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.262547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.262575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.262681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.262707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 
00:36:18.509 [2024-11-18 20:37:30.262788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.262814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.262894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.262924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.263039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.263063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.263160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.263185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.263280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.263307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 
00:36:18.509 [2024-11-18 20:37:30.263418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.263443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.263528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.263562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.263700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.263732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.263847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.263875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.264024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.264051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 
00:36:18.509 [2024-11-18 20:37:30.264167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.264195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.264318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.264362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.264446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.264474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.264672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.264701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.264783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.264807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 
00:36:18.509 [2024-11-18 20:37:30.264925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.264950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.265032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.265055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.265144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.265169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.265246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.265271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.265364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.265387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 
00:36:18.509 [2024-11-18 20:37:30.265518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.265558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.265664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.265692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.265804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.265832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.265949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.265975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.266060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.266087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 
00:36:18.509 [2024-11-18 20:37:30.266163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.266189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.266303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.266329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.266405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.266431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.266517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.266550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.266752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.266780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 
00:36:18.509 [2024-11-18 20:37:30.266863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.266888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.266975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.267001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.267116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.267142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.509 qpair failed and we were unable to recover it. 00:36:18.509 [2024-11-18 20:37:30.267236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.509 [2024-11-18 20:37:30.267276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 00:36:18.510 [2024-11-18 20:37:30.267362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.267400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 
00:36:18.510 [2024-11-18 20:37:30.267486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.267512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 00:36:18.510 [2024-11-18 20:37:30.267608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.267651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 00:36:18.510 [2024-11-18 20:37:30.267754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.267781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 00:36:18.510 [2024-11-18 20:37:30.267857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.267882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 00:36:18.510 [2024-11-18 20:37:30.267986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.268012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 
00:36:18.510 [2024-11-18 20:37:30.268151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.268191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 00:36:18.510 [2024-11-18 20:37:30.268313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.268342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 00:36:18.510 [2024-11-18 20:37:30.268465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.268492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 00:36:18.510 [2024-11-18 20:37:30.268574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.268600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 00:36:18.510 [2024-11-18 20:37:30.268696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.268724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 
00:36:18.510 [2024-11-18 20:37:30.268840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.268868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 00:36:18.510 [2024-11-18 20:37:30.268964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.268999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 00:36:18.510 [2024-11-18 20:37:30.269114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.269142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 00:36:18.510 [2024-11-18 20:37:30.269220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.269250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 00:36:18.510 [2024-11-18 20:37:30.269443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.269469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 
00:36:18.510 [2024-11-18 20:37:30.269581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.269610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 00:36:18.510 [2024-11-18 20:37:30.269777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.269817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 00:36:18.510 [2024-11-18 20:37:30.269908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.269941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 00:36:18.510 [2024-11-18 20:37:30.270065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.270093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 00:36:18.510 [2024-11-18 20:37:30.270207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.270234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 
00:36:18.510 [2024-11-18 20:37:30.270352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.270382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 00:36:18.510 [2024-11-18 20:37:30.270497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.270524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 00:36:18.510 [2024-11-18 20:37:30.270620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.270671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 00:36:18.510 [2024-11-18 20:37:30.270793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.270822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 00:36:18.510 [2024-11-18 20:37:30.270908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.270946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 
00:36:18.510 [2024-11-18 20:37:30.271078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.271103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 00:36:18.510 [2024-11-18 20:37:30.271212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.271240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 00:36:18.510 [2024-11-18 20:37:30.271326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.271353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 00:36:18.510 [2024-11-18 20:37:30.271470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.271499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 00:36:18.510 [2024-11-18 20:37:30.271616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.271662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 
00:36:18.510 [2024-11-18 20:37:30.271746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.271771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 00:36:18.510 [2024-11-18 20:37:30.271884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.271920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 00:36:18.510 [2024-11-18 20:37:30.272075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.272103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 00:36:18.510 [2024-11-18 20:37:30.272179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.272210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 00:36:18.510 [2024-11-18 20:37:30.272322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.510 [2024-11-18 20:37:30.272350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.510 qpair failed and we were unable to recover it. 
00:36:18.510 [2024-11-18 20:37:30.272434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.272461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.272594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.272666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.272788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.272817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.272906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.272934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.273048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.273076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 
00:36:18.511 [2024-11-18 20:37:30.273194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.273221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.273324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.273372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.273466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.273493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.273584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.273622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.273731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.273759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 
00:36:18.511 [2024-11-18 20:37:30.273837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.273863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.273984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.274011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.274109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.274136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.274247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.274284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.274374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.274401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 
00:36:18.511 [2024-11-18 20:37:30.274487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.274516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.274657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.274686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.274778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.274805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.274916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.274943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.275060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.275088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 
00:36:18.511 [2024-11-18 20:37:30.275185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.275215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.275342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.275377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.275457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.275489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.275575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.275601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.275691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.275718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 
00:36:18.511 [2024-11-18 20:37:30.275805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.275836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.275934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.275959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.276040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.276072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.276151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.276178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 Malloc0 00:36:18.511 [2024-11-18 20:37:30.276305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.276344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 
00:36:18.511 [2024-11-18 20:37:30.276437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.276462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.276566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.276606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.511 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.276715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.276749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:18.511 [2024-11-18 20:37:30.276836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.276863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 
00:36:18.511 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.511 [2024-11-18 20:37:30.276980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.277010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:18.511 [2024-11-18 20:37:30.277097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.277125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.277236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.277263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.277361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.277389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 
00:36:18.511 [2024-11-18 20:37:30.277512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.277542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.277630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.277663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.277758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.277785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.277892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.277920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.278018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.278046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 
00:36:18.511 [2024-11-18 20:37:30.278147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-11-18 20:37:30.278187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-11-18 20:37:30.278276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.278304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.278388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.278425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.278540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.278566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.278671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.278697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 
00:36:18.512 [2024-11-18 20:37:30.278787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.278814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.278894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.278930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.279015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.279044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.279130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.279157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.279272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.279300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 
00:36:18.512 [2024-11-18 20:37:30.279410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.279450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.279545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.279570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.279665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.279710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.279795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.279820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.279892] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:18.512 [2024-11-18 20:37:30.279910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.279938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 
00:36:18.512 [2024-11-18 20:37:30.280026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.280051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.280143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.280168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.280283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.280309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.280450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.280479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.280569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.280596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 
00:36:18.512 [2024-11-18 20:37:30.280698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.280732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.280815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.280839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.280965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.280991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.281072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.281096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.281177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.281207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 
00:36:18.512 [2024-11-18 20:37:30.281304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.281333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.281422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.281450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.281538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.281566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.281668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.281694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.281808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.281836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 
00:36:18.512 [2024-11-18 20:37:30.281920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.281959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.282079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.282113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.282222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.282251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.282370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.282397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.282496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.282524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 
00:36:18.512 [2024-11-18 20:37:30.282613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.282670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.282761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.282785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.282877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.282902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.283038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.283064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.283195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.283220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 
00:36:18.512 [2024-11-18 20:37:30.283307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.283336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.283428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.283455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.283610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.283669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.283762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.283791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-11-18 20:37:30.283885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-11-18 20:37:30.283912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 
00:36:18.513 [2024-11-18 20:37:30.283996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-11-18 20:37:30.284023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-11-18 20:37:30.284111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-11-18 20:37:30.284147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-11-18 20:37:30.284254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-11-18 20:37:30.284292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-11-18 20:37:30.284391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-11-18 20:37:30.284421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-11-18 20:37:30.284541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-11-18 20:37:30.284570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 
00:36:18.513 [2024-11-18 20:37:30.284688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-11-18 20:37:30.284715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-11-18 20:37:30.284803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-11-18 20:37:30.284827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-11-18 20:37:30.284911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-11-18 20:37:30.284948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-11-18 20:37:30.285034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-11-18 20:37:30.285058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-11-18 20:37:30.285187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-11-18 20:37:30.285216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 
00:36:18.513 [2024-11-18 20:37:30.285323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-11-18 20:37:30.285351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-11-18 20:37:30.285475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-11-18 20:37:30.285502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-11-18 20:37:30.285613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-11-18 20:37:30.285647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-11-18 20:37:30.285769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-11-18 20:37:30.285796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-11-18 20:37:30.285881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-11-18 20:37:30.285907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 
00:36:18.513 [2024-11-18 20:37:30.286028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-11-18 20:37:30.286063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-11-18 20:37:30.286152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-11-18 20:37:30.286177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-11-18 20:37:30.286253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-11-18 20:37:30.286278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-11-18 20:37:30.286398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-11-18 20:37:30.286423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-11-18 20:37:30.286528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-11-18 20:37:30.286554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 
00:36:18.513 [2024-11-18 20:37:30.286646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-11-18 20:37:30.286675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-11-18 20:37:30.286800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-11-18 20:37:30.286830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-11-18 20:37:30.286921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-11-18 20:37:30.286951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-11-18 20:37:30.287038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-11-18 20:37:30.287068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-11-18 20:37:30.287211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-11-18 20:37:30.287239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 
00:36:18.513 [2024-11-18 20:37:30.287359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-11-18 20:37:30.287389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-11-18 20:37:30.287483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-11-18 20:37:30.287509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-11-18 20:37:30.287658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-11-18 20:37:30.287699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-11-18 20:37:30.287797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-11-18 20:37:30.287825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-11-18 20:37:30.287915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-11-18 20:37:30.287953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 
00:36:18.513 [2024-11-18 20:37:30.288044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.513 [2024-11-18 20:37:30.288069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.513 qpair failed and we were unable to recover it.
00:36:18.513 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:18.513 [2024-11-18 20:37:30.288146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.513 [2024-11-18 20:37:30.288172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.513 qpair failed and we were unable to recover it.
00:36:18.513 [2024-11-18 20:37:30.288259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.513 [2024-11-18 20:37:30.288296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.513 qpair failed and we were unable to recover it.
00:36:18.513 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:36:18.513 [2024-11-18 20:37:30.288374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.513 [2024-11-18 20:37:30.288399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.513 qpair failed and we were unable to recover it.
00:36:18.513 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:18.513 [2024-11-18 20:37:30.288484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.513 [2024-11-18 20:37:30.288512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.513 qpair failed and we were unable to recover it.
00:36:18.513 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:18.514 [2024-11-18 20:37:30.288627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.288671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.288755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.288781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.288866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.288892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.289016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.289042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.289152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.289180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.289270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.289301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.289391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.289420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.289531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.289558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.289678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.289706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.289789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.289816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.289899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.289924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.290062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.290097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.290207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.290241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.290337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.290362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.290442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.290468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.290550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.290576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.290719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.290759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.290850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.290878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.290968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.290996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.291120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.291147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.291266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.291294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.291431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.291471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.291566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.291605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.291719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.291746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.291831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.291858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.291964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.291991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.292085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.292111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.292221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.292248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.292342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.292371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.292480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.292508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.292630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.292665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.292754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.292781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.292880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.292920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.293013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.293049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.293132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.293159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.293278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.293304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.293406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.293446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.293543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.293569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.293678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.293704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.293789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.293815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.514 [2024-11-18 20:37:30.293894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.514 [2024-11-18 20:37:30.293920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.514 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.294011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.294037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.294158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.294185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.294295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.294326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.294414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.294440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.294516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.294541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.294664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.294693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.294777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.294804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.294889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.294927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.295038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.295072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.295183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.295209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.295299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.295336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.295415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.295441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.295521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.295546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.295652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.295678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.295788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.295815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.295891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.295916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.295998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.296024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.296115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:18.515 [2024-11-18 20:37:30.296141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.296244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.296271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:36:18.515 [2024-11-18 20:37:30.296354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.296391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:18.515 [2024-11-18 20:37:30.296485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.296512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:18.515 [2024-11-18 20:37:30.296598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.296624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.296724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.296751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.296832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.296857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.296932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.296957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.297079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.297107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.297209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.297249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.297387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.297429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.297516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.297545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.297626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.297667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.297756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.297803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.297894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.297933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.298053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.298081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.298168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.298193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.298304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.298331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.298412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.298440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.298563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.298603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.298702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.298731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.515 [2024-11-18 20:37:30.298846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.515 [2024-11-18 20:37:30.298874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.515 qpair failed and we were unable to recover it.
00:36:18.516 [2024-11-18 20:37:30.298953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 [2024-11-18 20:37:30.298979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.516 [2024-11-18 20:37:30.299088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 [2024-11-18 20:37:30.299114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.516 [2024-11-18 20:37:30.299193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 [2024-11-18 20:37:30.299218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.516 [2024-11-18 20:37:30.299319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 [2024-11-18 20:37:30.299360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.516 [2024-11-18 20:37:30.299460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 [2024-11-18 20:37:30.299500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.516 [2024-11-18 20:37:30.299585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 [2024-11-18 20:37:30.299614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.516 [2024-11-18 20:37:30.299717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 [2024-11-18 20:37:30.299745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.516 [2024-11-18 20:37:30.299826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 [2024-11-18 20:37:30.299853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.516 [2024-11-18 20:37:30.299956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 [2024-11-18 20:37:30.299981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.516 [2024-11-18 20:37:30.300090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 [2024-11-18 20:37:30.300117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.516 [2024-11-18 20:37:30.300239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 [2024-11-18 20:37:30.300270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.516 [2024-11-18 20:37:30.300388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 [2024-11-18 20:37:30.300415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.516 [2024-11-18 20:37:30.300506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 [2024-11-18 20:37:30.300535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.516 [2024-11-18 20:37:30.300618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 [2024-11-18 20:37:30.300661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.516 [2024-11-18 20:37:30.300756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 [2024-11-18 20:37:30.300798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.516 [2024-11-18 20:37:30.300890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 [2024-11-18 20:37:30.300931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.516 [2024-11-18 20:37:30.301079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 [2024-11-18 20:37:30.301107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.516 [2024-11-18 20:37:30.301202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 [2024-11-18 20:37:30.301229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.516 [2024-11-18 20:37:30.301315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 [2024-11-18 20:37:30.301344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.516 [2024-11-18 20:37:30.301457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 [2024-11-18 20:37:30.301484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.516 [2024-11-18 20:37:30.301582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 [2024-11-18 20:37:30.301621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.516 [2024-11-18 20:37:30.301728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 [2024-11-18 20:37:30.301755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.516 [2024-11-18 20:37:30.301845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 [2024-11-18 20:37:30.301872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.516 [2024-11-18 20:37:30.301962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 [2024-11-18 20:37:30.301987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.516 [2024-11-18 20:37:30.302103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 [2024-11-18 20:37:30.302129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.516 [2024-11-18 20:37:30.302214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 [2024-11-18 20:37:30.302240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.516 [2024-11-18 20:37:30.302333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 [2024-11-18 20:37:30.302367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.516 [2024-11-18 20:37:30.302443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-11-18 20:37:30.302468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-11-18 20:37:30.302552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-11-18 20:37:30.302580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-11-18 20:37:30.302679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-11-18 20:37:30.302709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-11-18 20:37:30.302819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-11-18 20:37:30.302846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-11-18 20:37:30.302941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-11-18 20:37:30.302968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 
00:36:18.516 [2024-11-18 20:37:30.303053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-11-18 20:37:30.303079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-11-18 20:37:30.303159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-11-18 20:37:30.303186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-11-18 20:37:30.303273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-11-18 20:37:30.303300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-11-18 20:37:30.303424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-11-18 20:37:30.303453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-11-18 20:37:30.303541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-11-18 20:37:30.303569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe694000b90 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 
00:36:18.516 [2024-11-18 20:37:30.303662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-11-18 20:37:30.303690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-11-18 20:37:30.303776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-11-18 20:37:30.303803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-11-18 20:37:30.303890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-11-18 20:37:30.303926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-11-18 20:37:30.304011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-11-18 20:37:30.304039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-11-18 20:37:30.304116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-11-18 20:37:30.304145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 
00:36:18.516 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:18.516 [2024-11-18 20:37:30.304231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 [2024-11-18 20:37:30.304257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.516 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:18.516 [2024-11-18 20:37:30.304365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 [2024-11-18 20:37:30.304392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.516 [2024-11-18 20:37:30.304464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:18.516 [2024-11-18 20:37:30.304489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.516 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:18.516 [2024-11-18 20:37:30.304608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.516 [2024-11-18 20:37:30.304633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.516 qpair failed and we were unable to recover it.
00:36:18.517 [2024-11-18 20:37:30.304718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.517 [2024-11-18 20:37:30.304743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.517 qpair failed and we were unable to recover it.
00:36:18.517 [2024-11-18 20:37:30.304834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.517 [2024-11-18 20:37:30.304860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.517 qpair failed and we were unable to recover it.
00:36:18.517 [2024-11-18 20:37:30.304998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.517 [2024-11-18 20:37:30.305024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.517 qpair failed and we were unable to recover it.
00:36:18.517 [2024-11-18 20:37:30.305106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.517 [2024-11-18 20:37:30.305133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.517 qpair failed and we were unable to recover it.
00:36:18.517 [2024-11-18 20:37:30.305244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.517 [2024-11-18 20:37:30.305270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.517 qpair failed and we were unable to recover it.
00:36:18.517 [2024-11-18 20:37:30.305382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.517 [2024-11-18 20:37:30.305408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.517 qpair failed and we were unable to recover it.
00:36:18.517 [2024-11-18 20:37:30.305493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.517 [2024-11-18 20:37:30.305518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.517 qpair failed and we were unable to recover it.
00:36:18.517 [2024-11-18 20:37:30.305630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.517 [2024-11-18 20:37:30.305664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.517 qpair failed and we were unable to recover it.
00:36:18.517 [2024-11-18 20:37:30.305751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.517 [2024-11-18 20:37:30.305779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.517 qpair failed and we were unable to recover it.
00:36:18.517 [2024-11-18 20:37:30.305862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.517 [2024-11-18 20:37:30.305900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.517 qpair failed and we were unable to recover it.
00:36:18.517 [2024-11-18 20:37:30.305987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.517 [2024-11-18 20:37:30.306014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.517 qpair failed and we were unable to recover it.
00:36:18.517 [2024-11-18 20:37:30.306119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.517 [2024-11-18 20:37:30.306145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.517 qpair failed and we were unable to recover it.
00:36:18.517 [2024-11-18 20:37:30.306258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.517 [2024-11-18 20:37:30.306285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.517 qpair failed and we were unable to recover it.
00:36:18.517 [2024-11-18 20:37:30.306379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.517 [2024-11-18 20:37:30.306414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.517 qpair failed and we were unable to recover it.
00:36:18.517 [2024-11-18 20:37:30.306493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.517 [2024-11-18 20:37:30.306517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.517 qpair failed and we were unable to recover it.
00:36:18.517 [2024-11-18 20:37:30.306629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.517 [2024-11-18 20:37:30.306662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.517 qpair failed and we were unable to recover it.
00:36:18.517 [2024-11-18 20:37:30.306747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.517 [2024-11-18 20:37:30.306773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a0000b90 with addr=10.0.0.2, port=4420
00:36:18.517 qpair failed and we were unable to recover it.
00:36:18.517 [2024-11-18 20:37:30.306862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.517 [2024-11-18 20:37:30.306901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.517 qpair failed and we were unable to recover it.
00:36:18.517 [2024-11-18 20:37:30.306996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.517 [2024-11-18 20:37:30.307033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1671b40 with addr=10.0.0.2, port=4420
00:36:18.517 qpair failed and we were unable to recover it.
00:36:18.517 [2024-11-18 20:37:30.307151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.517 [2024-11-18 20:37:30.307178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.517 qpair failed and we were unable to recover it.
00:36:18.517 [2024-11-18 20:37:30.307265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.517 [2024-11-18 20:37:30.307292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.517 qpair failed and we were unable to recover it.
00:36:18.517 [2024-11-18 20:37:30.307376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.517 [2024-11-18 20:37:30.307403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.517 qpair failed and we were unable to recover it.
00:36:18.517 [2024-11-18 20:37:30.307496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.517 [2024-11-18 20:37:30.307524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.517 qpair failed and we were unable to recover it.
00:36:18.517 [2024-11-18 20:37:30.307606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.517 [2024-11-18 20:37:30.307634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.517 qpair failed and we were unable to recover it.
00:36:18.517 [2024-11-18 20:37:30.307723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.517 [2024-11-18 20:37:30.307755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.517 qpair failed and we were unable to recover it.
00:36:18.517 [2024-11-18 20:37:30.307874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.517 [2024-11-18 20:37:30.307901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.517 qpair failed and we were unable to recover it.
00:36:18.517 [2024-11-18 20:37:30.307995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.517 [2024-11-18 20:37:30.308021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe698000b90 with addr=10.0.0.2, port=4420
00:36:18.517 qpair failed and we were unable to recover it.
00:36:18.517 [2024-11-18 20:37:30.308182] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:18.517 [2024-11-18 20:37:30.310796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.517 [2024-11-18 20:37:30.310913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.517 [2024-11-18 20:37:30.310943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.517 [2024-11-18 20:37:30.310968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.517 [2024-11-18 20:37:30.310984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:18.517 [2024-11-18 20:37:30.311029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.517 qpair failed and we were unable to recover it.
00:36:18.517 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:18.517 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:36:18.517 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:18.517 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:18.517 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:18.517 20:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 404494
00:36:18.517 [2024-11-18 20:37:30.320517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.517 [2024-11-18 20:37:30.320619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.517 [2024-11-18 20:37:30.320668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.517 [2024-11-18 20:37:30.320684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.517 [2024-11-18 20:37:30.320696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:18.517 [2024-11-18 20:37:30.320727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.517 qpair failed and we were unable to recover it.
00:36:18.517 [2024-11-18 20:37:30.330558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.517 [2024-11-18 20:37:30.330662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.517 [2024-11-18 20:37:30.330688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.517 [2024-11-18 20:37:30.330703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.517 [2024-11-18 20:37:30.330715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:18.517 [2024-11-18 20:37:30.330746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.517 qpair failed and we were unable to recover it.
00:36:18.517 [2024-11-18 20:37:30.340566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.517 [2024-11-18 20:37:30.340679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.517 [2024-11-18 20:37:30.340705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.517 [2024-11-18 20:37:30.340720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.518 [2024-11-18 20:37:30.340732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:18.518 [2024-11-18 20:37:30.340763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.518 qpair failed and we were unable to recover it.
00:36:18.518 [2024-11-18 20:37:30.350525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.518 [2024-11-18 20:37:30.350649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.518 [2024-11-18 20:37:30.350675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.518 [2024-11-18 20:37:30.350701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.518 [2024-11-18 20:37:30.350713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:18.518 [2024-11-18 20:37:30.350744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.518 qpair failed and we were unable to recover it.
00:36:18.518 [2024-11-18 20:37:30.360522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.518 [2024-11-18 20:37:30.360614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.518 [2024-11-18 20:37:30.360650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.518 [2024-11-18 20:37:30.360669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.518 [2024-11-18 20:37:30.360690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:18.518 [2024-11-18 20:37:30.360722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.518 qpair failed and we were unable to recover it.
00:36:18.518 [2024-11-18 20:37:30.370630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.518 [2024-11-18 20:37:30.370759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.518 [2024-11-18 20:37:30.370790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.518 [2024-11-18 20:37:30.370805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.518 [2024-11-18 20:37:30.370818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:18.518 [2024-11-18 20:37:30.370849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.518 qpair failed and we were unable to recover it.
00:36:18.518 [2024-11-18 20:37:30.380585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.518 [2024-11-18 20:37:30.380687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.518 [2024-11-18 20:37:30.380713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.518 [2024-11-18 20:37:30.380727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.518 [2024-11-18 20:37:30.380746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:18.518 [2024-11-18 20:37:30.380775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.518 qpair failed and we were unable to recover it.
00:36:18.518 [2024-11-18 20:37:30.390736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.518 [2024-11-18 20:37:30.390830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.518 [2024-11-18 20:37:30.390856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.518 [2024-11-18 20:37:30.390875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.518 [2024-11-18 20:37:30.390899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:18.518 [2024-11-18 20:37:30.390931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.518 qpair failed and we were unable to recover it.
00:36:18.518 [2024-11-18 20:37:30.400671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.518 [2024-11-18 20:37:30.400763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.518 [2024-11-18 20:37:30.400788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.518 [2024-11-18 20:37:30.400803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.518 [2024-11-18 20:37:30.400815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:18.518 [2024-11-18 20:37:30.400847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.518 qpair failed and we were unable to recover it.
00:36:18.518 [2024-11-18 20:37:30.410677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.518 [2024-11-18 20:37:30.410787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.518 [2024-11-18 20:37:30.410811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.518 [2024-11-18 20:37:30.410836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.518 [2024-11-18 20:37:30.410849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:18.518 [2024-11-18 20:37:30.410880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.518 qpair failed and we were unable to recover it.
00:36:18.518 [2024-11-18 20:37:30.420697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.518 [2024-11-18 20:37:30.420788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.518 [2024-11-18 20:37:30.420813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.518 [2024-11-18 20:37:30.420828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.518 [2024-11-18 20:37:30.420840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:18.518 [2024-11-18 20:37:30.420870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.518 qpair failed and we were unable to recover it.
00:36:18.518 [2024-11-18 20:37:30.430726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.518 [2024-11-18 20:37:30.430817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.518 [2024-11-18 20:37:30.430841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.518 [2024-11-18 20:37:30.430856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.518 [2024-11-18 20:37:30.430868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:18.518 [2024-11-18 20:37:30.430898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.518 qpair failed and we were unable to recover it.
00:36:18.518 [2024-11-18 20:37:30.440730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.518 [2024-11-18 20:37:30.440817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.518 [2024-11-18 20:37:30.440841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.518 [2024-11-18 20:37:30.440855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.518 [2024-11-18 20:37:30.440867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:18.518 [2024-11-18 20:37:30.440897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.518 qpair failed and we were unable to recover it. 
00:36:18.518 [2024-11-18 20:37:30.450871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.518 [2024-11-18 20:37:30.450953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.518 [2024-11-18 20:37:30.450978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.518 [2024-11-18 20:37:30.450992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.518 [2024-11-18 20:37:30.451004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:18.518 [2024-11-18 20:37:30.451035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.518 qpair failed and we were unable to recover it. 
00:36:18.779 [2024-11-18 20:37:30.460829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.779 [2024-11-18 20:37:30.460923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.779 [2024-11-18 20:37:30.460948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.779 [2024-11-18 20:37:30.460963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.779 [2024-11-18 20:37:30.460976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:18.779 [2024-11-18 20:37:30.461006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.779 qpair failed and we were unable to recover it. 
00:36:18.779 [2024-11-18 20:37:30.470941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.779 [2024-11-18 20:37:30.471033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.779 [2024-11-18 20:37:30.471057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.779 [2024-11-18 20:37:30.471086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.779 [2024-11-18 20:37:30.471099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:18.779 [2024-11-18 20:37:30.471129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.779 qpair failed and we were unable to recover it. 
00:36:18.779 [2024-11-18 20:37:30.480898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.779 [2024-11-18 20:37:30.480992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.779 [2024-11-18 20:37:30.481017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.779 [2024-11-18 20:37:30.481032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.779 [2024-11-18 20:37:30.481044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:18.779 [2024-11-18 20:37:30.481089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.779 qpair failed and we were unable to recover it. 
00:36:18.779 [2024-11-18 20:37:30.490912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.779 [2024-11-18 20:37:30.491044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.779 [2024-11-18 20:37:30.491069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.779 [2024-11-18 20:37:30.491083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.779 [2024-11-18 20:37:30.491097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:18.779 [2024-11-18 20:37:30.491126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.779 qpair failed and we were unable to recover it. 
00:36:18.779 [2024-11-18 20:37:30.500946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.779 [2024-11-18 20:37:30.501040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.779 [2024-11-18 20:37:30.501065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.779 [2024-11-18 20:37:30.501080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.779 [2024-11-18 20:37:30.501092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:18.779 [2024-11-18 20:37:30.501122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.779 qpair failed and we were unable to recover it. 
00:36:18.779 [2024-11-18 20:37:30.510989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.779 [2024-11-18 20:37:30.511076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.779 [2024-11-18 20:37:30.511101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.779 [2024-11-18 20:37:30.511115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.779 [2024-11-18 20:37:30.511128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:18.779 [2024-11-18 20:37:30.511158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.779 qpair failed and we were unable to recover it. 
00:36:18.779 [2024-11-18 20:37:30.521000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.779 [2024-11-18 20:37:30.521090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.780 [2024-11-18 20:37:30.521115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.780 [2024-11-18 20:37:30.521130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.780 [2024-11-18 20:37:30.521142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:18.780 [2024-11-18 20:37:30.521172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.780 qpair failed and we were unable to recover it. 
00:36:18.780 [2024-11-18 20:37:30.531092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.780 [2024-11-18 20:37:30.531180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.780 [2024-11-18 20:37:30.531205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.780 [2024-11-18 20:37:30.531220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.780 [2024-11-18 20:37:30.531232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:18.780 [2024-11-18 20:37:30.531262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.780 qpair failed and we were unable to recover it. 
00:36:18.780 [2024-11-18 20:37:30.541045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.780 [2024-11-18 20:37:30.541138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.780 [2024-11-18 20:37:30.541163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.780 [2024-11-18 20:37:30.541184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.780 [2024-11-18 20:37:30.541197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:18.780 [2024-11-18 20:37:30.541227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.780 qpair failed and we were unable to recover it. 
00:36:18.780 [2024-11-18 20:37:30.551075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.780 [2024-11-18 20:37:30.551160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.780 [2024-11-18 20:37:30.551185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.780 [2024-11-18 20:37:30.551200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.780 [2024-11-18 20:37:30.551212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:18.780 [2024-11-18 20:37:30.551242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.780 qpair failed and we were unable to recover it. 
00:36:18.780 [2024-11-18 20:37:30.561074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.780 [2024-11-18 20:37:30.561158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.780 [2024-11-18 20:37:30.561184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.780 [2024-11-18 20:37:30.561199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.780 [2024-11-18 20:37:30.561211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:18.780 [2024-11-18 20:37:30.561241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.780 qpair failed and we were unable to recover it. 
00:36:18.780 [2024-11-18 20:37:30.571142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.780 [2024-11-18 20:37:30.571230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.780 [2024-11-18 20:37:30.571255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.780 [2024-11-18 20:37:30.571270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.780 [2024-11-18 20:37:30.571282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:18.780 [2024-11-18 20:37:30.571312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.780 qpair failed and we were unable to recover it. 
00:36:18.780 [2024-11-18 20:37:30.581279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.780 [2024-11-18 20:37:30.581403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.780 [2024-11-18 20:37:30.581427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.780 [2024-11-18 20:37:30.581441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.780 [2024-11-18 20:37:30.581454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:18.780 [2024-11-18 20:37:30.581489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.780 qpair failed and we were unable to recover it. 
00:36:18.780 [2024-11-18 20:37:30.591173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.780 [2024-11-18 20:37:30.591260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.780 [2024-11-18 20:37:30.591286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.780 [2024-11-18 20:37:30.591300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.780 [2024-11-18 20:37:30.591313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:18.780 [2024-11-18 20:37:30.591344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.780 qpair failed and we were unable to recover it. 
00:36:18.780 [2024-11-18 20:37:30.601242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.780 [2024-11-18 20:37:30.601335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.780 [2024-11-18 20:37:30.601361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.780 [2024-11-18 20:37:30.601375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.780 [2024-11-18 20:37:30.601388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:18.780 [2024-11-18 20:37:30.601418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.780 qpair failed and we were unable to recover it. 
00:36:18.780 [2024-11-18 20:37:30.611212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.780 [2024-11-18 20:37:30.611312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.780 [2024-11-18 20:37:30.611337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.780 [2024-11-18 20:37:30.611351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.780 [2024-11-18 20:37:30.611363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:18.780 [2024-11-18 20:37:30.611394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.780 qpair failed and we were unable to recover it. 
00:36:18.780 [2024-11-18 20:37:30.621300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.780 [2024-11-18 20:37:30.621393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.780 [2024-11-18 20:37:30.621419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.780 [2024-11-18 20:37:30.621433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.780 [2024-11-18 20:37:30.621445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:18.780 [2024-11-18 20:37:30.621498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.780 qpair failed and we were unable to recover it. 
00:36:18.780 [2024-11-18 20:37:30.631309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.780 [2024-11-18 20:37:30.631431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.780 [2024-11-18 20:37:30.631457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.780 [2024-11-18 20:37:30.631471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.780 [2024-11-18 20:37:30.631484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:18.780 [2024-11-18 20:37:30.631514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.780 qpair failed and we were unable to recover it. 
00:36:18.780 [2024-11-18 20:37:30.641325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.780 [2024-11-18 20:37:30.641416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.780 [2024-11-18 20:37:30.641441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.780 [2024-11-18 20:37:30.641455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.780 [2024-11-18 20:37:30.641468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:18.780 [2024-11-18 20:37:30.641498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.780 qpair failed and we were unable to recover it. 
00:36:18.780 [2024-11-18 20:37:30.651345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.780 [2024-11-18 20:37:30.651469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.781 [2024-11-18 20:37:30.651494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.781 [2024-11-18 20:37:30.651509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.781 [2024-11-18 20:37:30.651522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:18.781 [2024-11-18 20:37:30.651553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.781 qpair failed and we were unable to recover it. 
00:36:18.781 [2024-11-18 20:37:30.661387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.781 [2024-11-18 20:37:30.661479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.781 [2024-11-18 20:37:30.661504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.781 [2024-11-18 20:37:30.661519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.781 [2024-11-18 20:37:30.661532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:18.781 [2024-11-18 20:37:30.661562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.781 qpair failed and we were unable to recover it. 
00:36:18.781 [2024-11-18 20:37:30.671418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.781 [2024-11-18 20:37:30.671508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.781 [2024-11-18 20:37:30.671538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.781 [2024-11-18 20:37:30.671553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.781 [2024-11-18 20:37:30.671565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:18.781 [2024-11-18 20:37:30.671596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.781 qpair failed and we were unable to recover it. 
00:36:18.781 [2024-11-18 20:37:30.681518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.781 [2024-11-18 20:37:30.681597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.781 [2024-11-18 20:37:30.681644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.781 [2024-11-18 20:37:30.681661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.781 [2024-11-18 20:37:30.681688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:18.781 [2024-11-18 20:37:30.681719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.781 qpair failed and we were unable to recover it. 
00:36:18.781 [2024-11-18 20:37:30.691465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.781 [2024-11-18 20:37:30.691544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.781 [2024-11-18 20:37:30.691569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.781 [2024-11-18 20:37:30.691584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.781 [2024-11-18 20:37:30.691597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:18.781 [2024-11-18 20:37:30.691627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.781 qpair failed and we were unable to recover it. 
00:36:18.781 [2024-11-18 20:37:30.701491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.781 [2024-11-18 20:37:30.701593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.781 [2024-11-18 20:37:30.701618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.781 [2024-11-18 20:37:30.701632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.781 [2024-11-18 20:37:30.701654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:18.781 [2024-11-18 20:37:30.701686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.781 qpair failed and we were unable to recover it. 
00:36:18.781 [2024-11-18 20:37:30.711515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.781 [2024-11-18 20:37:30.711606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.781 [2024-11-18 20:37:30.711630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.781 [2024-11-18 20:37:30.711654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.781 [2024-11-18 20:37:30.711673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:18.781 [2024-11-18 20:37:30.711705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.781 qpair failed and we were unable to recover it. 
00:36:19.307 [2024-11-18 20:37:31.062525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.307 [2024-11-18 20:37:31.062621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.307 [2024-11-18 20:37:31.062651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.307 [2024-11-18 20:37:31.062676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.307 [2024-11-18 20:37:31.062689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.307 [2024-11-18 20:37:31.062720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.307 qpair failed and we were unable to recover it. 
00:36:19.307 [2024-11-18 20:37:31.072564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.307 [2024-11-18 20:37:31.072656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.307 [2024-11-18 20:37:31.072690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.307 [2024-11-18 20:37:31.072705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.307 [2024-11-18 20:37:31.072717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.307 [2024-11-18 20:37:31.072748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.307 qpair failed and we were unable to recover it. 
00:36:19.307 [2024-11-18 20:37:31.082570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.307 [2024-11-18 20:37:31.082667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.307 [2024-11-18 20:37:31.082696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.307 [2024-11-18 20:37:31.082710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.307 [2024-11-18 20:37:31.082722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.307 [2024-11-18 20:37:31.082752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.307 qpair failed and we were unable to recover it. 
00:36:19.307 [2024-11-18 20:37:31.092741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.307 [2024-11-18 20:37:31.092842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.307 [2024-11-18 20:37:31.092867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.307 [2024-11-18 20:37:31.092881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.307 [2024-11-18 20:37:31.092909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.307 [2024-11-18 20:37:31.092939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.307 qpair failed and we were unable to recover it. 
00:36:19.307 [2024-11-18 20:37:31.102662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.307 [2024-11-18 20:37:31.102759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.307 [2024-11-18 20:37:31.102783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.307 [2024-11-18 20:37:31.102811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.307 [2024-11-18 20:37:31.102825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.307 [2024-11-18 20:37:31.102856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.307 qpair failed and we were unable to recover it. 
00:36:19.307 [2024-11-18 20:37:31.112878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.307 [2024-11-18 20:37:31.112972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.307 [2024-11-18 20:37:31.112997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.307 [2024-11-18 20:37:31.113012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.307 [2024-11-18 20:37:31.113025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.307 [2024-11-18 20:37:31.113054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.307 qpair failed and we were unable to recover it. 
00:36:19.307 [2024-11-18 20:37:31.122807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.307 [2024-11-18 20:37:31.122905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.307 [2024-11-18 20:37:31.122934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.307 [2024-11-18 20:37:31.122950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.307 [2024-11-18 20:37:31.122963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.307 [2024-11-18 20:37:31.122993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.307 qpair failed and we were unable to recover it. 
00:36:19.307 [2024-11-18 20:37:31.132793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.307 [2024-11-18 20:37:31.132884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.307 [2024-11-18 20:37:31.132912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.307 [2024-11-18 20:37:31.132927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.308 [2024-11-18 20:37:31.132940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.308 [2024-11-18 20:37:31.132970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.308 qpair failed and we were unable to recover it. 
00:36:19.308 [2024-11-18 20:37:31.142824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.308 [2024-11-18 20:37:31.142918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.308 [2024-11-18 20:37:31.142942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.308 [2024-11-18 20:37:31.142956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.308 [2024-11-18 20:37:31.142968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.308 [2024-11-18 20:37:31.143004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.308 qpair failed and we were unable to recover it. 
00:36:19.308 [2024-11-18 20:37:31.152803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.308 [2024-11-18 20:37:31.152894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.308 [2024-11-18 20:37:31.152919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.308 [2024-11-18 20:37:31.152934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.308 [2024-11-18 20:37:31.152947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.308 [2024-11-18 20:37:31.152977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.308 qpair failed and we were unable to recover it. 
00:36:19.308 [2024-11-18 20:37:31.162811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.308 [2024-11-18 20:37:31.162908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.308 [2024-11-18 20:37:31.162936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.308 [2024-11-18 20:37:31.162951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.308 [2024-11-18 20:37:31.162965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.308 [2024-11-18 20:37:31.162996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.308 qpair failed and we were unable to recover it. 
00:36:19.308 [2024-11-18 20:37:31.172914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.308 [2024-11-18 20:37:31.172996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.308 [2024-11-18 20:37:31.173021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.308 [2024-11-18 20:37:31.173036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.308 [2024-11-18 20:37:31.173048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.308 [2024-11-18 20:37:31.173078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.308 qpair failed and we were unable to recover it. 
00:36:19.308 [2024-11-18 20:37:31.182918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.308 [2024-11-18 20:37:31.183059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.308 [2024-11-18 20:37:31.183084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.308 [2024-11-18 20:37:31.183099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.308 [2024-11-18 20:37:31.183111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.308 [2024-11-18 20:37:31.183141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.308 qpair failed and we were unable to recover it. 
00:36:19.308 [2024-11-18 20:37:31.192897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.308 [2024-11-18 20:37:31.192982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.308 [2024-11-18 20:37:31.193006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.308 [2024-11-18 20:37:31.193021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.308 [2024-11-18 20:37:31.193033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.308 [2024-11-18 20:37:31.193063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.308 qpair failed and we were unable to recover it. 
00:36:19.308 [2024-11-18 20:37:31.202922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.308 [2024-11-18 20:37:31.203044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.308 [2024-11-18 20:37:31.203068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.308 [2024-11-18 20:37:31.203083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.308 [2024-11-18 20:37:31.203095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.308 [2024-11-18 20:37:31.203126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.308 qpair failed and we were unable to recover it. 
00:36:19.308 [2024-11-18 20:37:31.212938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.308 [2024-11-18 20:37:31.213023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.308 [2024-11-18 20:37:31.213047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.308 [2024-11-18 20:37:31.213062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.308 [2024-11-18 20:37:31.213074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.308 [2024-11-18 20:37:31.213105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.308 qpair failed and we were unable to recover it. 
00:36:19.308 [2024-11-18 20:37:31.223050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.308 [2024-11-18 20:37:31.223147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.308 [2024-11-18 20:37:31.223172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.308 [2024-11-18 20:37:31.223187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.308 [2024-11-18 20:37:31.223199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.308 [2024-11-18 20:37:31.223230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.308 qpair failed and we were unable to recover it. 
00:36:19.308 [2024-11-18 20:37:31.233020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.308 [2024-11-18 20:37:31.233109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.308 [2024-11-18 20:37:31.233139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.308 [2024-11-18 20:37:31.233155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.308 [2024-11-18 20:37:31.233168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.308 [2024-11-18 20:37:31.233198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.308 qpair failed and we were unable to recover it. 
00:36:19.308 [2024-11-18 20:37:31.243018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.308 [2024-11-18 20:37:31.243101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.308 [2024-11-18 20:37:31.243126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.308 [2024-11-18 20:37:31.243141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.308 [2024-11-18 20:37:31.243153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.308 [2024-11-18 20:37:31.243183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.308 qpair failed and we were unable to recover it. 
00:36:19.308 [2024-11-18 20:37:31.253110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.308 [2024-11-18 20:37:31.253199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.308 [2024-11-18 20:37:31.253224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.308 [2024-11-18 20:37:31.253239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.308 [2024-11-18 20:37:31.253251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.308 [2024-11-18 20:37:31.253281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.308 qpair failed and we were unable to recover it. 
00:36:19.308 [2024-11-18 20:37:31.263162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.308 [2024-11-18 20:37:31.263293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.309 [2024-11-18 20:37:31.263318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.309 [2024-11-18 20:37:31.263332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.309 [2024-11-18 20:37:31.263345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.309 [2024-11-18 20:37:31.263390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.309 qpair failed and we were unable to recover it. 
00:36:19.309 [2024-11-18 20:37:31.273135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.309 [2024-11-18 20:37:31.273222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.309 [2024-11-18 20:37:31.273246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.309 [2024-11-18 20:37:31.273261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.309 [2024-11-18 20:37:31.273279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.309 [2024-11-18 20:37:31.273310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.309 qpair failed and we were unable to recover it. 
00:36:19.309 [2024-11-18 20:37:31.283193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.309 [2024-11-18 20:37:31.283313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.309 [2024-11-18 20:37:31.283342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.309 [2024-11-18 20:37:31.283358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.309 [2024-11-18 20:37:31.283372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.309 [2024-11-18 20:37:31.283402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.309 qpair failed and we were unable to recover it. 
00:36:19.309 [2024-11-18 20:37:31.293193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.309 [2024-11-18 20:37:31.293306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.309 [2024-11-18 20:37:31.293333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.309 [2024-11-18 20:37:31.293351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.309 [2024-11-18 20:37:31.293364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.309 [2024-11-18 20:37:31.293396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.309 qpair failed and we were unable to recover it. 
00:36:19.309 [2024-11-18 20:37:31.303209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.309 [2024-11-18 20:37:31.303303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.309 [2024-11-18 20:37:31.303328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.309 [2024-11-18 20:37:31.303343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.309 [2024-11-18 20:37:31.303355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.309 [2024-11-18 20:37:31.303386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.309 qpair failed and we were unable to recover it. 
00:36:19.309 [2024-11-18 20:37:31.313208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.571 [2024-11-18 20:37:31.313299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.571 [2024-11-18 20:37:31.313328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.571 [2024-11-18 20:37:31.313344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.571 [2024-11-18 20:37:31.313358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.571 [2024-11-18 20:37:31.313389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.571 qpair failed and we were unable to recover it. 
00:36:19.571 [2024-11-18 20:37:31.323323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.571 [2024-11-18 20:37:31.323405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.571 [2024-11-18 20:37:31.323430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.571 [2024-11-18 20:37:31.323445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.571 [2024-11-18 20:37:31.323458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.571 [2024-11-18 20:37:31.323488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.571 qpair failed and we were unable to recover it. 
00:36:19.571 [2024-11-18 20:37:31.333278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.571 [2024-11-18 20:37:31.333361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.571 [2024-11-18 20:37:31.333387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.571 [2024-11-18 20:37:31.333402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.571 [2024-11-18 20:37:31.333415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.571 [2024-11-18 20:37:31.333445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.571 qpair failed and we were unable to recover it.
00:36:19.571 [2024-11-18 20:37:31.343323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.571 [2024-11-18 20:37:31.343412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.571 [2024-11-18 20:37:31.343437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.571 [2024-11-18 20:37:31.343451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.571 [2024-11-18 20:37:31.343463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.571 [2024-11-18 20:37:31.343493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.571 qpair failed and we were unable to recover it.
00:36:19.571 [2024-11-18 20:37:31.353416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.571 [2024-11-18 20:37:31.353504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.571 [2024-11-18 20:37:31.353529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.571 [2024-11-18 20:37:31.353543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.571 [2024-11-18 20:37:31.353556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.571 [2024-11-18 20:37:31.353587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.571 qpair failed and we were unable to recover it.
00:36:19.571 [2024-11-18 20:37:31.363361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.571 [2024-11-18 20:37:31.363450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.571 [2024-11-18 20:37:31.363481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.571 [2024-11-18 20:37:31.363495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.571 [2024-11-18 20:37:31.363508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.571 [2024-11-18 20:37:31.363538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.571 qpair failed and we were unable to recover it.
00:36:19.571 [2024-11-18 20:37:31.373407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.571 [2024-11-18 20:37:31.373490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.571 [2024-11-18 20:37:31.373515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.571 [2024-11-18 20:37:31.373529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.571 [2024-11-18 20:37:31.373542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.571 [2024-11-18 20:37:31.373572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.571 qpair failed and we were unable to recover it.
00:36:19.571 [2024-11-18 20:37:31.383468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.571 [2024-11-18 20:37:31.383574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.571 [2024-11-18 20:37:31.383601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.571 [2024-11-18 20:37:31.383615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.571 [2024-11-18 20:37:31.383627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.572 [2024-11-18 20:37:31.383667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.572 qpair failed and we were unable to recover it.
00:36:19.572 [2024-11-18 20:37:31.393458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.572 [2024-11-18 20:37:31.393542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.572 [2024-11-18 20:37:31.393567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.572 [2024-11-18 20:37:31.393581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.572 [2024-11-18 20:37:31.393594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.572 [2024-11-18 20:37:31.393624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.572 qpair failed and we were unable to recover it.
00:36:19.572 [2024-11-18 20:37:31.403520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.572 [2024-11-18 20:37:31.403645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.572 [2024-11-18 20:37:31.403670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.572 [2024-11-18 20:37:31.403685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.572 [2024-11-18 20:37:31.403703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.572 [2024-11-18 20:37:31.403735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.572 qpair failed and we were unable to recover it.
00:36:19.572 [2024-11-18 20:37:31.413563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.572 [2024-11-18 20:37:31.413668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.572 [2024-11-18 20:37:31.413693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.572 [2024-11-18 20:37:31.413707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.572 [2024-11-18 20:37:31.413719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.572 [2024-11-18 20:37:31.413750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.572 qpair failed and we were unable to recover it.
00:36:19.572 [2024-11-18 20:37:31.423533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.572 [2024-11-18 20:37:31.423629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.572 [2024-11-18 20:37:31.423662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.572 [2024-11-18 20:37:31.423687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.572 [2024-11-18 20:37:31.423700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.572 [2024-11-18 20:37:31.423730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.572 qpair failed and we were unable to recover it.
00:36:19.572 [2024-11-18 20:37:31.433552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.572 [2024-11-18 20:37:31.433649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.572 [2024-11-18 20:37:31.433674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.572 [2024-11-18 20:37:31.433688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.572 [2024-11-18 20:37:31.433701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.572 [2024-11-18 20:37:31.433730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.572 qpair failed and we were unable to recover it.
00:36:19.572 [2024-11-18 20:37:31.443590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.572 [2024-11-18 20:37:31.443683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.572 [2024-11-18 20:37:31.443708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.572 [2024-11-18 20:37:31.443722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.572 [2024-11-18 20:37:31.443735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.572 [2024-11-18 20:37:31.443765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.572 qpair failed and we were unable to recover it.
00:36:19.572 [2024-11-18 20:37:31.453607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.572 [2024-11-18 20:37:31.453701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.572 [2024-11-18 20:37:31.453726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.572 [2024-11-18 20:37:31.453740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.572 [2024-11-18 20:37:31.453752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.572 [2024-11-18 20:37:31.453783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.572 qpair failed and we were unable to recover it.
00:36:19.572 [2024-11-18 20:37:31.463741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.572 [2024-11-18 20:37:31.463835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.572 [2024-11-18 20:37:31.463862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.572 [2024-11-18 20:37:31.463877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.572 [2024-11-18 20:37:31.463890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.572 [2024-11-18 20:37:31.463921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.572 qpair failed and we were unable to recover it.
00:36:19.572 [2024-11-18 20:37:31.473734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.572 [2024-11-18 20:37:31.473822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.572 [2024-11-18 20:37:31.473847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.572 [2024-11-18 20:37:31.473861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.572 [2024-11-18 20:37:31.473874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.572 [2024-11-18 20:37:31.473905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.572 qpair failed and we were unable to recover it.
00:36:19.572 [2024-11-18 20:37:31.483720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.572 [2024-11-18 20:37:31.483805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.572 [2024-11-18 20:37:31.483833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.572 [2024-11-18 20:37:31.483848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.572 [2024-11-18 20:37:31.483861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.572 [2024-11-18 20:37:31.483903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.572 qpair failed and we were unable to recover it.
00:36:19.572 [2024-11-18 20:37:31.493727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.572 [2024-11-18 20:37:31.493836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.572 [2024-11-18 20:37:31.493868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.572 [2024-11-18 20:37:31.493884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.572 [2024-11-18 20:37:31.493896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.572 [2024-11-18 20:37:31.493926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.572 qpair failed and we were unable to recover it.
00:36:19.572 [2024-11-18 20:37:31.503746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.572 [2024-11-18 20:37:31.503839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.572 [2024-11-18 20:37:31.503863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.572 [2024-11-18 20:37:31.503877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.572 [2024-11-18 20:37:31.503889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.573 [2024-11-18 20:37:31.503920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.573 qpair failed and we were unable to recover it.
00:36:19.573 [2024-11-18 20:37:31.513772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.573 [2024-11-18 20:37:31.513860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.573 [2024-11-18 20:37:31.513884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.573 [2024-11-18 20:37:31.513899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.573 [2024-11-18 20:37:31.513912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.573 [2024-11-18 20:37:31.513941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.573 qpair failed and we were unable to recover it.
00:36:19.573 [2024-11-18 20:37:31.523801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.573 [2024-11-18 20:37:31.523894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.573 [2024-11-18 20:37:31.523918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.573 [2024-11-18 20:37:31.523932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.573 [2024-11-18 20:37:31.523945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.573 [2024-11-18 20:37:31.523974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.573 qpair failed and we were unable to recover it.
00:36:19.573 [2024-11-18 20:37:31.533848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.573 [2024-11-18 20:37:31.533931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.573 [2024-11-18 20:37:31.533956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.573 [2024-11-18 20:37:31.533979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.573 [2024-11-18 20:37:31.533992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.573 [2024-11-18 20:37:31.534022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.573 qpair failed and we were unable to recover it.
00:36:19.573 [2024-11-18 20:37:31.543927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.573 [2024-11-18 20:37:31.544018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.573 [2024-11-18 20:37:31.544043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.573 [2024-11-18 20:37:31.544057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.573 [2024-11-18 20:37:31.544070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.573 [2024-11-18 20:37:31.544100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.573 qpair failed and we were unable to recover it.
00:36:19.573 [2024-11-18 20:37:31.553928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.573 [2024-11-18 20:37:31.554017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.573 [2024-11-18 20:37:31.554041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.573 [2024-11-18 20:37:31.554055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.573 [2024-11-18 20:37:31.554068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.573 [2024-11-18 20:37:31.554097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.573 qpair failed and we were unable to recover it.
00:36:19.573 [2024-11-18 20:37:31.563957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.573 [2024-11-18 20:37:31.564073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.573 [2024-11-18 20:37:31.564099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.573 [2024-11-18 20:37:31.564114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.573 [2024-11-18 20:37:31.564126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.573 [2024-11-18 20:37:31.564157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.573 qpair failed and we were unable to recover it.
00:36:19.573 [2024-11-18 20:37:31.573953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.573 [2024-11-18 20:37:31.574038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.573 [2024-11-18 20:37:31.574063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.573 [2024-11-18 20:37:31.574077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.573 [2024-11-18 20:37:31.574090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.573 [2024-11-18 20:37:31.574125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.573 qpair failed and we were unable to recover it.
00:36:19.884 [2024-11-18 20:37:31.583996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.884 [2024-11-18 20:37:31.584132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.884 [2024-11-18 20:37:31.584159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.884 [2024-11-18 20:37:31.584174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.884 [2024-11-18 20:37:31.584186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.884 [2024-11-18 20:37:31.584216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.884 qpair failed and we were unable to recover it.
00:36:19.884 [2024-11-18 20:37:31.594009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.884 [2024-11-18 20:37:31.594095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.885 [2024-11-18 20:37:31.594120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.885 [2024-11-18 20:37:31.594134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.885 [2024-11-18 20:37:31.594147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.885 [2024-11-18 20:37:31.594189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.885 qpair failed and we were unable to recover it.
00:36:19.885 [2024-11-18 20:37:31.604036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.885 [2024-11-18 20:37:31.604119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.885 [2024-11-18 20:37:31.604143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.885 [2024-11-18 20:37:31.604157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.885 [2024-11-18 20:37:31.604170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.885 [2024-11-18 20:37:31.604200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.885 qpair failed and we were unable to recover it.
00:36:19.885 [2024-11-18 20:37:31.614154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.885 [2024-11-18 20:37:31.614232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.885 [2024-11-18 20:37:31.614256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.885 [2024-11-18 20:37:31.614270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.885 [2024-11-18 20:37:31.614282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.885 [2024-11-18 20:37:31.614312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.885 qpair failed and we were unable to recover it.
00:36:19.885 [2024-11-18 20:37:31.624167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.885 [2024-11-18 20:37:31.624264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.885 [2024-11-18 20:37:31.624289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.885 [2024-11-18 20:37:31.624303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.885 [2024-11-18 20:37:31.624317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.885 [2024-11-18 20:37:31.624347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.885 qpair failed and we were unable to recover it.
00:36:19.885 [2024-11-18 20:37:31.634157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.885 [2024-11-18 20:37:31.634256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.885 [2024-11-18 20:37:31.634281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.885 [2024-11-18 20:37:31.634295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.885 [2024-11-18 20:37:31.634308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.885 [2024-11-18 20:37:31.634338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.885 qpair failed and we were unable to recover it.
00:36:19.885 [2024-11-18 20:37:31.644166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.885 [2024-11-18 20:37:31.644255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.885 [2024-11-18 20:37:31.644279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.885 [2024-11-18 20:37:31.644294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.885 [2024-11-18 20:37:31.644306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.885 [2024-11-18 20:37:31.644336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.885 qpair failed and we were unable to recover it.
00:36:19.885 [2024-11-18 20:37:31.654194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.885 [2024-11-18 20:37:31.654282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.885 [2024-11-18 20:37:31.654307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.885 [2024-11-18 20:37:31.654321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.885 [2024-11-18 20:37:31.654333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.885 [2024-11-18 20:37:31.654363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.885 qpair failed and we were unable to recover it.
00:36:19.885 [2024-11-18 20:37:31.664245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.885 [2024-11-18 20:37:31.664334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.885 [2024-11-18 20:37:31.664358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.885 [2024-11-18 20:37:31.664378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.885 [2024-11-18 20:37:31.664391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.885 [2024-11-18 20:37:31.664421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.885 qpair failed and we were unable to recover it.
00:36:19.885 [2024-11-18 20:37:31.674237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.885 [2024-11-18 20:37:31.674336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.885 [2024-11-18 20:37:31.674360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.885 [2024-11-18 20:37:31.674375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.885 [2024-11-18 20:37:31.674387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90
00:36:19.885 [2024-11-18 20:37:31.674417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.885 qpair failed and we were unable to recover it.
00:36:19.885 [2024-11-18 20:37:31.684264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.885 [2024-11-18 20:37:31.684366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.885 [2024-11-18 20:37:31.684392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.885 [2024-11-18 20:37:31.684407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.885 [2024-11-18 20:37:31.684419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.885 [2024-11-18 20:37:31.684449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.885 qpair failed and we were unable to recover it. 
00:36:19.885 [2024-11-18 20:37:31.694306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.885 [2024-11-18 20:37:31.694430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.885 [2024-11-18 20:37:31.694456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.886 [2024-11-18 20:37:31.694470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.886 [2024-11-18 20:37:31.694483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.886 [2024-11-18 20:37:31.694513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.886 qpair failed and we were unable to recover it. 
00:36:19.886 [2024-11-18 20:37:31.704403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.886 [2024-11-18 20:37:31.704495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.886 [2024-11-18 20:37:31.704519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.886 [2024-11-18 20:37:31.704533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.886 [2024-11-18 20:37:31.704545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.886 [2024-11-18 20:37:31.704581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.886 qpair failed and we were unable to recover it. 
00:36:19.886 [2024-11-18 20:37:31.714420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.886 [2024-11-18 20:37:31.714505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.886 [2024-11-18 20:37:31.714529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.886 [2024-11-18 20:37:31.714544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.886 [2024-11-18 20:37:31.714557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.886 [2024-11-18 20:37:31.714587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.886 qpair failed and we were unable to recover it. 
00:36:19.886 [2024-11-18 20:37:31.724463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.886 [2024-11-18 20:37:31.724546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.886 [2024-11-18 20:37:31.724570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.886 [2024-11-18 20:37:31.724584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.886 [2024-11-18 20:37:31.724597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.886 [2024-11-18 20:37:31.724626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.886 qpair failed and we were unable to recover it. 
00:36:19.886 [2024-11-18 20:37:31.734423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.886 [2024-11-18 20:37:31.734505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.886 [2024-11-18 20:37:31.734530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.886 [2024-11-18 20:37:31.734544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.886 [2024-11-18 20:37:31.734556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.886 [2024-11-18 20:37:31.734586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.886 qpair failed and we were unable to recover it. 
00:36:19.886 [2024-11-18 20:37:31.744550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.886 [2024-11-18 20:37:31.744655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.886 [2024-11-18 20:37:31.744679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.886 [2024-11-18 20:37:31.744694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.886 [2024-11-18 20:37:31.744706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:19.886 [2024-11-18 20:37:31.744736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.886 qpair failed and we were unable to recover it. 00:36:19.886 [2024-11-18 20:37:31.744775] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:36:19.886 A controller has encountered a failure and is being reset. 
00:36:19.886 [2024-11-18 20:37:31.754491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.886 [2024-11-18 20:37:31.754581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.886 [2024-11-18 20:37:31.754611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.886 [2024-11-18 20:37:31.754626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.886 [2024-11-18 20:37:31.754648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:19.886 [2024-11-18 20:37:31.754681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:19.886 qpair failed and we were unable to recover it. 
00:36:19.886 [2024-11-18 20:37:31.764505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.886 [2024-11-18 20:37:31.764596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.886 [2024-11-18 20:37:31.764621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.886 [2024-11-18 20:37:31.764644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.886 [2024-11-18 20:37:31.764659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:19.886 [2024-11-18 20:37:31.764688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:19.886 qpair failed and we were unable to recover it. 
00:36:19.886 [2024-11-18 20:37:31.774560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.886 [2024-11-18 20:37:31.774682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.886 [2024-11-18 20:37:31.774710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.886 [2024-11-18 20:37:31.774724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.886 [2024-11-18 20:37:31.774737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:19.886 [2024-11-18 20:37:31.774766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:19.886 qpair failed and we were unable to recover it. 
00:36:19.886 [2024-11-18 20:37:31.784579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.886 [2024-11-18 20:37:31.784668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.886 [2024-11-18 20:37:31.784693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.886 [2024-11-18 20:37:31.784707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.886 [2024-11-18 20:37:31.784719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:19.886 [2024-11-18 20:37:31.784748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:19.886 qpair failed and we were unable to recover it. 
00:36:19.886 [2024-11-18 20:37:31.794590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.886 [2024-11-18 20:37:31.794697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.886 [2024-11-18 20:37:31.794723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.886 [2024-11-18 20:37:31.794737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.886 [2024-11-18 20:37:31.794750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:19.886 [2024-11-18 20:37:31.794778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:19.886 qpair failed and we were unable to recover it. 
00:36:19.886 [2024-11-18 20:37:31.804682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.886 [2024-11-18 20:37:31.804769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.886 [2024-11-18 20:37:31.804794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.886 [2024-11-18 20:37:31.804808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.886 [2024-11-18 20:37:31.804820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:19.886 [2024-11-18 20:37:31.804849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:19.887 qpair failed and we were unable to recover it. 
00:36:19.887 [2024-11-18 20:37:31.814658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.887 [2024-11-18 20:37:31.814749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.887 [2024-11-18 20:37:31.814774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.887 [2024-11-18 20:37:31.814788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.887 [2024-11-18 20:37:31.814800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:19.887 [2024-11-18 20:37:31.814829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:19.887 qpair failed and we were unable to recover it. 
00:36:19.887 [2024-11-18 20:37:31.824689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.887 [2024-11-18 20:37:31.824778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.887 [2024-11-18 20:37:31.824801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.887 [2024-11-18 20:37:31.824816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.887 [2024-11-18 20:37:31.824829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:19.887 [2024-11-18 20:37:31.824857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:19.887 qpair failed and we were unable to recover it. 
00:36:19.887 [2024-11-18 20:37:31.834714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.887 [2024-11-18 20:37:31.834811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.887 [2024-11-18 20:37:31.834835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.887 [2024-11-18 20:37:31.834855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.887 [2024-11-18 20:37:31.834869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:19.887 [2024-11-18 20:37:31.834897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:19.887 qpair failed and we were unable to recover it. 
00:36:19.887 [2024-11-18 20:37:31.844741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.887 [2024-11-18 20:37:31.844828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.887 [2024-11-18 20:37:31.844852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.887 [2024-11-18 20:37:31.844866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.887 [2024-11-18 20:37:31.844879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:19.887 [2024-11-18 20:37:31.844907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:19.887 qpair failed and we were unable to recover it. 
00:36:19.887 [2024-11-18 20:37:31.854754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.887 [2024-11-18 20:37:31.854836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.887 [2024-11-18 20:37:31.854861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.887 [2024-11-18 20:37:31.854877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.887 [2024-11-18 20:37:31.854890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:19.887 [2024-11-18 20:37:31.854919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:19.887 qpair failed and we were unable to recover it. 
00:36:19.887 [2024-11-18 20:37:31.864854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.887 [2024-11-18 20:37:31.864961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.887 [2024-11-18 20:37:31.864989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.887 [2024-11-18 20:37:31.865004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.887 [2024-11-18 20:37:31.865017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:19.887 [2024-11-18 20:37:31.865046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:19.887 qpair failed and we were unable to recover it. 
00:36:20.147 [2024-11-18 20:37:31.874858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.147 [2024-11-18 20:37:31.874956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.147 [2024-11-18 20:37:31.874981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.147 [2024-11-18 20:37:31.874995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.147 [2024-11-18 20:37:31.875008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.147 [2024-11-18 20:37:31.875037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.147 qpair failed and we were unable to recover it. 
00:36:20.147 [2024-11-18 20:37:31.884889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.147 [2024-11-18 20:37:31.884973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.147 [2024-11-18 20:37:31.885001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.147 [2024-11-18 20:37:31.885018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.147 [2024-11-18 20:37:31.885031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.147 [2024-11-18 20:37:31.885060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.147 qpair failed and we were unable to recover it. 
00:36:20.147 [2024-11-18 20:37:31.894913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.147 [2024-11-18 20:37:31.894999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.147 [2024-11-18 20:37:31.895024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.147 [2024-11-18 20:37:31.895039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.147 [2024-11-18 20:37:31.895052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.147 [2024-11-18 20:37:31.895081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.147 qpair failed and we were unable to recover it. 
00:36:20.147 [2024-11-18 20:37:31.904928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.147 [2024-11-18 20:37:31.905059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.147 [2024-11-18 20:37:31.905084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.147 [2024-11-18 20:37:31.905099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.147 [2024-11-18 20:37:31.905111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.147 [2024-11-18 20:37:31.905140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.147 qpair failed and we were unable to recover it. 
00:36:20.147 [2024-11-18 20:37:31.914957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.147 [2024-11-18 20:37:31.915041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.147 [2024-11-18 20:37:31.915067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.147 [2024-11-18 20:37:31.915081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.147 [2024-11-18 20:37:31.915093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.147 [2024-11-18 20:37:31.915122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.147 qpair failed and we were unable to recover it. 
00:36:20.147 [2024-11-18 20:37:31.924948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.147 [2024-11-18 20:37:31.925040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.147 [2024-11-18 20:37:31.925065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.147 [2024-11-18 20:37:31.925080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.147 [2024-11-18 20:37:31.925092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.147 [2024-11-18 20:37:31.925121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.147 qpair failed and we were unable to recover it. 
00:36:20.147 [2024-11-18 20:37:31.934997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.147 [2024-11-18 20:37:31.935082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.147 [2024-11-18 20:37:31.935106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.147 [2024-11-18 20:37:31.935120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.147 [2024-11-18 20:37:31.935133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.147 [2024-11-18 20:37:31.935162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.147 qpair failed and we were unable to recover it. 
00:36:20.147 [2024-11-18 20:37:31.945159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.147 [2024-11-18 20:37:31.945268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.147 [2024-11-18 20:37:31.945294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.147 [2024-11-18 20:37:31.945322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.147 [2024-11-18 20:37:31.945335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.147 [2024-11-18 20:37:31.945364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.147 qpair failed and we were unable to recover it. 
00:36:20.147 [2024-11-18 20:37:31.955131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.147 [2024-11-18 20:37:31.955217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.147 [2024-11-18 20:37:31.955241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.147 [2024-11-18 20:37:31.955255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.147 [2024-11-18 20:37:31.955268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.147 [2024-11-18 20:37:31.955296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.147 qpair failed and we were unable to recover it. 
00:36:20.147 [2024-11-18 20:37:31.965111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.147 [2024-11-18 20:37:31.965228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.147 [2024-11-18 20:37:31.965252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.147 [2024-11-18 20:37:31.965272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.147 [2024-11-18 20:37:31.965285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.147 [2024-11-18 20:37:31.965313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.147 qpair failed and we were unable to recover it. 
00:36:20.147 [2024-11-18 20:37:31.975118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.148 [2024-11-18 20:37:31.975201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.148 [2024-11-18 20:37:31.975226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.148 [2024-11-18 20:37:31.975240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.148 [2024-11-18 20:37:31.975252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.148 [2024-11-18 20:37:31.975280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.148 qpair failed and we were unable to recover it.
00:36:20.148 [2024-11-18 20:37:31.985163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.148 [2024-11-18 20:37:31.985275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.148 [2024-11-18 20:37:31.985301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.148 [2024-11-18 20:37:31.985316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.148 [2024-11-18 20:37:31.985328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.148 [2024-11-18 20:37:31.985357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.148 qpair failed and we were unable to recover it.
00:36:20.148 [2024-11-18 20:37:31.995230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.148 [2024-11-18 20:37:31.995320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.148 [2024-11-18 20:37:31.995344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.148 [2024-11-18 20:37:31.995358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.148 [2024-11-18 20:37:31.995371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.148 [2024-11-18 20:37:31.995400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.148 qpair failed and we were unable to recover it.
00:36:20.148 [2024-11-18 20:37:32.005215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.148 [2024-11-18 20:37:32.005303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.148 [2024-11-18 20:37:32.005327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.148 [2024-11-18 20:37:32.005341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.148 [2024-11-18 20:37:32.005354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.148 [2024-11-18 20:37:32.005388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.148 qpair failed and we were unable to recover it.
00:36:20.148 [2024-11-18 20:37:32.015198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.148 [2024-11-18 20:37:32.015288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.148 [2024-11-18 20:37:32.015313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.148 [2024-11-18 20:37:32.015327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.148 [2024-11-18 20:37:32.015339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.148 [2024-11-18 20:37:32.015369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.148 qpair failed and we were unable to recover it.
00:36:20.148 [2024-11-18 20:37:32.025257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.148 [2024-11-18 20:37:32.025390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.148 [2024-11-18 20:37:32.025416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.148 [2024-11-18 20:37:32.025431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.148 [2024-11-18 20:37:32.025443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.148 [2024-11-18 20:37:32.025471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.148 qpair failed and we were unable to recover it.
00:36:20.148 [2024-11-18 20:37:32.035314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.148 [2024-11-18 20:37:32.035434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.148 [2024-11-18 20:37:32.035461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.148 [2024-11-18 20:37:32.035475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.148 [2024-11-18 20:37:32.035488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.148 [2024-11-18 20:37:32.035516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.148 qpair failed and we were unable to recover it.
00:36:20.148 [2024-11-18 20:37:32.045312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.148 [2024-11-18 20:37:32.045428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.148 [2024-11-18 20:37:32.045458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.148 [2024-11-18 20:37:32.045479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.148 [2024-11-18 20:37:32.045492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.148 [2024-11-18 20:37:32.045522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.148 qpair failed and we were unable to recover it.
00:36:20.148 [2024-11-18 20:37:32.055372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.148 [2024-11-18 20:37:32.055495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.148 [2024-11-18 20:37:32.055522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.148 [2024-11-18 20:37:32.055538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.148 [2024-11-18 20:37:32.055550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.148 [2024-11-18 20:37:32.055578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.148 qpair failed and we were unable to recover it.
00:36:20.148 [2024-11-18 20:37:32.065398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.148 [2024-11-18 20:37:32.065498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.148 [2024-11-18 20:37:32.065523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.148 [2024-11-18 20:37:32.065538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.148 [2024-11-18 20:37:32.065550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.148 [2024-11-18 20:37:32.065578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.148 qpair failed and we were unable to recover it.
00:36:20.148 [2024-11-18 20:37:32.075390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.148 [2024-11-18 20:37:32.075506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.148 [2024-11-18 20:37:32.075534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.148 [2024-11-18 20:37:32.075549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.148 [2024-11-18 20:37:32.075562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.148 [2024-11-18 20:37:32.075591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.148 qpair failed and we were unable to recover it.
00:36:20.148 [2024-11-18 20:37:32.085410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.148 [2024-11-18 20:37:32.085499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.148 [2024-11-18 20:37:32.085524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.148 [2024-11-18 20:37:32.085538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.148 [2024-11-18 20:37:32.085551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.148 [2024-11-18 20:37:32.085580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.148 qpair failed and we were unable to recover it.
00:36:20.148 [2024-11-18 20:37:32.095465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.148 [2024-11-18 20:37:32.095588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.148 [2024-11-18 20:37:32.095614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.148 [2024-11-18 20:37:32.095648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.148 [2024-11-18 20:37:32.095663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.148 [2024-11-18 20:37:32.095692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.148 qpair failed and we were unable to recover it.
00:36:20.148 [2024-11-18 20:37:32.105485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.149 [2024-11-18 20:37:32.105579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.149 [2024-11-18 20:37:32.105604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.149 [2024-11-18 20:37:32.105618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.149 [2024-11-18 20:37:32.105630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.149 [2024-11-18 20:37:32.105669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.149 qpair failed and we were unable to recover it.
00:36:20.149 [2024-11-18 20:37:32.115582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.149 [2024-11-18 20:37:32.115676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.149 [2024-11-18 20:37:32.115702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.149 [2024-11-18 20:37:32.115718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.149 [2024-11-18 20:37:32.115730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.149 [2024-11-18 20:37:32.115759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.149 qpair failed and we were unable to recover it.
00:36:20.149 [2024-11-18 20:37:32.125547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.149 [2024-11-18 20:37:32.125628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.149 [2024-11-18 20:37:32.125659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.149 [2024-11-18 20:37:32.125674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.149 [2024-11-18 20:37:32.125687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.149 [2024-11-18 20:37:32.125716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.149 qpair failed and we were unable to recover it.
00:36:20.149 [2024-11-18 20:37:32.135562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.149 [2024-11-18 20:37:32.135665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.149 [2024-11-18 20:37:32.135690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.149 [2024-11-18 20:37:32.135704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.149 [2024-11-18 20:37:32.135717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.149 [2024-11-18 20:37:32.135751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.149 qpair failed and we were unable to recover it.
00:36:20.149 [2024-11-18 20:37:32.145621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.149 [2024-11-18 20:37:32.145768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.149 [2024-11-18 20:37:32.145794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.149 [2024-11-18 20:37:32.145810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.149 [2024-11-18 20:37:32.145823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.149 [2024-11-18 20:37:32.145853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.149 qpair failed and we were unable to recover it.
00:36:20.408 [2024-11-18 20:37:32.155643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.409 [2024-11-18 20:37:32.155735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.409 [2024-11-18 20:37:32.155761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.409 [2024-11-18 20:37:32.155776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.409 [2024-11-18 20:37:32.155789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.409 [2024-11-18 20:37:32.155818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.409 qpair failed and we were unable to recover it.
00:36:20.409 [2024-11-18 20:37:32.165734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.409 [2024-11-18 20:37:32.165869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.409 [2024-11-18 20:37:32.165897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.409 [2024-11-18 20:37:32.165912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.409 [2024-11-18 20:37:32.165935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.409 [2024-11-18 20:37:32.165964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.409 qpair failed and we were unable to recover it.
00:36:20.409 [2024-11-18 20:37:32.175690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.409 [2024-11-18 20:37:32.175778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.409 [2024-11-18 20:37:32.175803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.409 [2024-11-18 20:37:32.175817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.409 [2024-11-18 20:37:32.175830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.409 [2024-11-18 20:37:32.175859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.409 qpair failed and we were unable to recover it.
00:36:20.409 [2024-11-18 20:37:32.185732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.409 [2024-11-18 20:37:32.185835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.409 [2024-11-18 20:37:32.185862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.409 [2024-11-18 20:37:32.185876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.409 [2024-11-18 20:37:32.185888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.409 [2024-11-18 20:37:32.185917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.409 qpair failed and we were unable to recover it.
00:36:20.409 [2024-11-18 20:37:32.195795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.409 [2024-11-18 20:37:32.195905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.409 [2024-11-18 20:37:32.195942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.409 [2024-11-18 20:37:32.195957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.409 [2024-11-18 20:37:32.195969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.409 [2024-11-18 20:37:32.195998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.409 qpair failed and we were unable to recover it.
00:36:20.409 [2024-11-18 20:37:32.205794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.409 [2024-11-18 20:37:32.205880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.409 [2024-11-18 20:37:32.205906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.409 [2024-11-18 20:37:32.205919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.409 [2024-11-18 20:37:32.205935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.409 [2024-11-18 20:37:32.205963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.409 qpair failed and we were unable to recover it.
00:36:20.409 [2024-11-18 20:37:32.215789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.409 [2024-11-18 20:37:32.215909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.409 [2024-11-18 20:37:32.215943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.409 [2024-11-18 20:37:32.215959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.409 [2024-11-18 20:37:32.215971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.409 [2024-11-18 20:37:32.215999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.409 qpair failed and we were unable to recover it.
00:36:20.409 [2024-11-18 20:37:32.225883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.409 [2024-11-18 20:37:32.225973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.409 [2024-11-18 20:37:32.226000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.409 [2024-11-18 20:37:32.226020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.409 [2024-11-18 20:37:32.226033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.409 [2024-11-18 20:37:32.226063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.409 qpair failed and we were unable to recover it.
00:36:20.409 [2024-11-18 20:37:32.235849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.409 [2024-11-18 20:37:32.235943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.409 [2024-11-18 20:37:32.235967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.409 [2024-11-18 20:37:32.235982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.409 [2024-11-18 20:37:32.235994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.409 [2024-11-18 20:37:32.236023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.409 qpair failed and we were unable to recover it.
00:36:20.409 [2024-11-18 20:37:32.245913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.409 [2024-11-18 20:37:32.245997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.409 [2024-11-18 20:37:32.246022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.409 [2024-11-18 20:37:32.246037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.409 [2024-11-18 20:37:32.246049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.409 [2024-11-18 20:37:32.246077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.409 qpair failed and we were unable to recover it.
00:36:20.409 [2024-11-18 20:37:32.255921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.410 [2024-11-18 20:37:32.256044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.410 [2024-11-18 20:37:32.256069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.410 [2024-11-18 20:37:32.256083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.410 [2024-11-18 20:37:32.256096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.410 [2024-11-18 20:37:32.256124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.410 qpair failed and we were unable to recover it.
00:36:20.410 [2024-11-18 20:37:32.265944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.410 [2024-11-18 20:37:32.266032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.410 [2024-11-18 20:37:32.266056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.410 [2024-11-18 20:37:32.266071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.410 [2024-11-18 20:37:32.266083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.410 [2024-11-18 20:37:32.266116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.410 qpair failed and we were unable to recover it.
00:36:20.410 [2024-11-18 20:37:32.276029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.410 [2024-11-18 20:37:32.276125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.410 [2024-11-18 20:37:32.276152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.410 [2024-11-18 20:37:32.276167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.410 [2024-11-18 20:37:32.276179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:20.410 [2024-11-18 20:37:32.276207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:20.410 qpair failed and we were unable to recover it.
00:36:20.410 [2024-11-18 20:37:32.285991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.410 [2024-11-18 20:37:32.286117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.410 [2024-11-18 20:37:32.286141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.410 [2024-11-18 20:37:32.286155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.410 [2024-11-18 20:37:32.286167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.410 [2024-11-18 20:37:32.286196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.410 qpair failed and we were unable to recover it. 
00:36:20.410 [2024-11-18 20:37:32.296043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.410 [2024-11-18 20:37:32.296163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.410 [2024-11-18 20:37:32.296188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.410 [2024-11-18 20:37:32.296203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.410 [2024-11-18 20:37:32.296215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.410 [2024-11-18 20:37:32.296244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.410 qpair failed and we were unable to recover it. 
00:36:20.410 [2024-11-18 20:37:32.306115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.410 [2024-11-18 20:37:32.306211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.410 [2024-11-18 20:37:32.306235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.410 [2024-11-18 20:37:32.306250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.410 [2024-11-18 20:37:32.306262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.410 [2024-11-18 20:37:32.306290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.410 qpair failed and we were unable to recover it. 
00:36:20.410 [2024-11-18 20:37:32.316136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.410 [2024-11-18 20:37:32.316239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.410 [2024-11-18 20:37:32.316265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.410 [2024-11-18 20:37:32.316280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.410 [2024-11-18 20:37:32.316292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.410 [2024-11-18 20:37:32.316321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.410 qpair failed and we were unable to recover it. 
00:36:20.410 [2024-11-18 20:37:32.326167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.410 [2024-11-18 20:37:32.326282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.410 [2024-11-18 20:37:32.326309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.410 [2024-11-18 20:37:32.326324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.410 [2024-11-18 20:37:32.326336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.410 [2024-11-18 20:37:32.326379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.410 qpair failed and we were unable to recover it. 
00:36:20.410 [2024-11-18 20:37:32.336142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.410 [2024-11-18 20:37:32.336224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.410 [2024-11-18 20:37:32.336248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.410 [2024-11-18 20:37:32.336262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.410 [2024-11-18 20:37:32.336275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.410 [2024-11-18 20:37:32.336303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.410 qpair failed and we were unable to recover it. 
00:36:20.410 [2024-11-18 20:37:32.346199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.410 [2024-11-18 20:37:32.346331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.410 [2024-11-18 20:37:32.346361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.410 [2024-11-18 20:37:32.346377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.410 [2024-11-18 20:37:32.346404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.410 [2024-11-18 20:37:32.346434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.410 qpair failed and we were unable to recover it. 
00:36:20.410 [2024-11-18 20:37:32.356220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.410 [2024-11-18 20:37:32.356350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.410 [2024-11-18 20:37:32.356376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.410 [2024-11-18 20:37:32.356396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.410 [2024-11-18 20:37:32.356409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.410 [2024-11-18 20:37:32.356438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.410 qpair failed and we were unable to recover it. 
00:36:20.410 [2024-11-18 20:37:32.366344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.410 [2024-11-18 20:37:32.366435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.411 [2024-11-18 20:37:32.366459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.411 [2024-11-18 20:37:32.366474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.411 [2024-11-18 20:37:32.366486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.411 [2024-11-18 20:37:32.366515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.411 qpair failed and we were unable to recover it. 
00:36:20.411 [2024-11-18 20:37:32.376269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.411 [2024-11-18 20:37:32.376359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.411 [2024-11-18 20:37:32.376383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.411 [2024-11-18 20:37:32.376398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.411 [2024-11-18 20:37:32.376410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.411 [2024-11-18 20:37:32.376439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.411 qpair failed and we were unable to recover it. 
00:36:20.411 [2024-11-18 20:37:32.386306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.411 [2024-11-18 20:37:32.386407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.411 [2024-11-18 20:37:32.386433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.411 [2024-11-18 20:37:32.386448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.411 [2024-11-18 20:37:32.386460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.411 [2024-11-18 20:37:32.386499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.411 qpair failed and we were unable to recover it. 
00:36:20.411 [2024-11-18 20:37:32.396327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.411 [2024-11-18 20:37:32.396432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.411 [2024-11-18 20:37:32.396460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.411 [2024-11-18 20:37:32.396474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.411 [2024-11-18 20:37:32.396487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.411 [2024-11-18 20:37:32.396520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.411 qpair failed and we were unable to recover it. 
00:36:20.411 [2024-11-18 20:37:32.406372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.411 [2024-11-18 20:37:32.406496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.411 [2024-11-18 20:37:32.406520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.411 [2024-11-18 20:37:32.406535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.411 [2024-11-18 20:37:32.406548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.411 [2024-11-18 20:37:32.406577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.411 qpair failed and we were unable to recover it. 
00:36:20.672 [2024-11-18 20:37:32.416400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.672 [2024-11-18 20:37:32.416489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.672 [2024-11-18 20:37:32.416514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.672 [2024-11-18 20:37:32.416529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.672 [2024-11-18 20:37:32.416542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.672 [2024-11-18 20:37:32.416571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.672 qpair failed and we were unable to recover it. 
00:36:20.673 [2024-11-18 20:37:32.426441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.673 [2024-11-18 20:37:32.426588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.673 [2024-11-18 20:37:32.426617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.673 [2024-11-18 20:37:32.426634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.673 [2024-11-18 20:37:32.426661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.673 [2024-11-18 20:37:32.426693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.673 qpair failed and we were unable to recover it. 
00:36:20.673 [2024-11-18 20:37:32.436459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.673 [2024-11-18 20:37:32.436572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.673 [2024-11-18 20:37:32.436598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.673 [2024-11-18 20:37:32.436613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.673 [2024-11-18 20:37:32.436626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.673 [2024-11-18 20:37:32.436663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.673 qpair failed and we were unable to recover it. 
00:36:20.673 [2024-11-18 20:37:32.446465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.673 [2024-11-18 20:37:32.446562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.673 [2024-11-18 20:37:32.446587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.673 [2024-11-18 20:37:32.446601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.673 [2024-11-18 20:37:32.446613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.673 [2024-11-18 20:37:32.446649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.673 qpair failed and we were unable to recover it. 
00:36:20.673 [2024-11-18 20:37:32.456503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.673 [2024-11-18 20:37:32.456589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.673 [2024-11-18 20:37:32.456613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.673 [2024-11-18 20:37:32.456627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.673 [2024-11-18 20:37:32.456648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.673 [2024-11-18 20:37:32.456678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.673 qpair failed and we were unable to recover it. 
00:36:20.673 [2024-11-18 20:37:32.466517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.673 [2024-11-18 20:37:32.466612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.673 [2024-11-18 20:37:32.466643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.673 [2024-11-18 20:37:32.466659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.673 [2024-11-18 20:37:32.466672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.673 [2024-11-18 20:37:32.466701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.673 qpair failed and we were unable to recover it. 
00:36:20.673 [2024-11-18 20:37:32.476558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.673 [2024-11-18 20:37:32.476667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.673 [2024-11-18 20:37:32.476694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.673 [2024-11-18 20:37:32.476708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.673 [2024-11-18 20:37:32.476720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.673 [2024-11-18 20:37:32.476749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.673 qpair failed and we were unable to recover it. 
00:36:20.673 [2024-11-18 20:37:32.486632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.673 [2024-11-18 20:37:32.486778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.673 [2024-11-18 20:37:32.486804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.673 [2024-11-18 20:37:32.486828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.673 [2024-11-18 20:37:32.486841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.673 [2024-11-18 20:37:32.486870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.673 qpair failed and we were unable to recover it. 
00:36:20.673 [2024-11-18 20:37:32.496584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.673 [2024-11-18 20:37:32.496681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.673 [2024-11-18 20:37:32.496705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.673 [2024-11-18 20:37:32.496719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.673 [2024-11-18 20:37:32.496731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.673 [2024-11-18 20:37:32.496760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.673 qpair failed and we were unable to recover it. 
00:36:20.673 [2024-11-18 20:37:32.506668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.673 [2024-11-18 20:37:32.506763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.673 [2024-11-18 20:37:32.506788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.673 [2024-11-18 20:37:32.506802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.673 [2024-11-18 20:37:32.506814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.673 [2024-11-18 20:37:32.506843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.673 qpair failed and we were unable to recover it. 
00:36:20.673 [2024-11-18 20:37:32.516676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.673 [2024-11-18 20:37:32.516777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.673 [2024-11-18 20:37:32.516803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.673 [2024-11-18 20:37:32.516818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.673 [2024-11-18 20:37:32.516830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.673 [2024-11-18 20:37:32.516859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.673 qpair failed and we were unable to recover it. 
00:36:20.673 [2024-11-18 20:37:32.526677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.673 [2024-11-18 20:37:32.526771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.673 [2024-11-18 20:37:32.526794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.673 [2024-11-18 20:37:32.526809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.673 [2024-11-18 20:37:32.526821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.673 [2024-11-18 20:37:32.526855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.673 qpair failed and we were unable to recover it. 
00:36:20.673 [2024-11-18 20:37:32.536721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.673 [2024-11-18 20:37:32.536810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.673 [2024-11-18 20:37:32.536834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.673 [2024-11-18 20:37:32.536848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.673 [2024-11-18 20:37:32.536860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.673 [2024-11-18 20:37:32.536889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.673 qpair failed and we were unable to recover it. 
00:36:20.673 [2024-11-18 20:37:32.546746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.673 [2024-11-18 20:37:32.546841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.673 [2024-11-18 20:37:32.546867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.673 [2024-11-18 20:37:32.546881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.673 [2024-11-18 20:37:32.546894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.673 [2024-11-18 20:37:32.546922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.674 qpair failed and we were unable to recover it. 
00:36:20.674 [2024-11-18 20:37:32.556780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.674 [2024-11-18 20:37:32.556873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.674 [2024-11-18 20:37:32.556897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.674 [2024-11-18 20:37:32.556912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.674 [2024-11-18 20:37:32.556924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.674 [2024-11-18 20:37:32.556963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.674 qpair failed and we were unable to recover it. 
00:36:20.674 [2024-11-18 20:37:32.566835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.674 [2024-11-18 20:37:32.566918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.674 [2024-11-18 20:37:32.566942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.674 [2024-11-18 20:37:32.566956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.674 [2024-11-18 20:37:32.566969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.674 [2024-11-18 20:37:32.567006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.674 qpair failed and we were unable to recover it. 
00:36:20.674 [2024-11-18 20:37:32.576849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.674 [2024-11-18 20:37:32.576968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.674 [2024-11-18 20:37:32.576993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.674 [2024-11-18 20:37:32.577007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.674 [2024-11-18 20:37:32.577021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.674 [2024-11-18 20:37:32.577049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.674 qpair failed and we were unable to recover it. 
00:36:20.674 [2024-11-18 20:37:32.586880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.674 [2024-11-18 20:37:32.586971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.674 [2024-11-18 20:37:32.586996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.674 [2024-11-18 20:37:32.587010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.674 [2024-11-18 20:37:32.587022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.674 [2024-11-18 20:37:32.587051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.674 qpair failed and we were unable to recover it. 
00:36:20.674 [2024-11-18 20:37:32.596889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.674 [2024-11-18 20:37:32.596978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.674 [2024-11-18 20:37:32.597003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.674 [2024-11-18 20:37:32.597017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.674 [2024-11-18 20:37:32.597030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.674 [2024-11-18 20:37:32.597058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.674 qpair failed and we were unable to recover it. 
00:36:20.674 [2024-11-18 20:37:32.606971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.674 [2024-11-18 20:37:32.607088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.674 [2024-11-18 20:37:32.607113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.674 [2024-11-18 20:37:32.607128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.674 [2024-11-18 20:37:32.607140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.674 [2024-11-18 20:37:32.607169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.674 qpair failed and we were unable to recover it. 
00:36:20.674 [2024-11-18 20:37:32.616963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.674 [2024-11-18 20:37:32.617045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.674 [2024-11-18 20:37:32.617070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.674 [2024-11-18 20:37:32.617090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.674 [2024-11-18 20:37:32.617103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.674 [2024-11-18 20:37:32.617132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.674 qpair failed and we were unable to recover it. 
00:36:20.674 [2024-11-18 20:37:32.627126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.674 [2024-11-18 20:37:32.627228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.674 [2024-11-18 20:37:32.627252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.674 [2024-11-18 20:37:32.627267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.674 [2024-11-18 20:37:32.627280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.674 [2024-11-18 20:37:32.627308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.674 qpair failed and we were unable to recover it. 
00:36:20.674 [2024-11-18 20:37:32.637049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.674 [2024-11-18 20:37:32.637140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.674 [2024-11-18 20:37:32.637165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.674 [2024-11-18 20:37:32.637179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.674 [2024-11-18 20:37:32.637192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.674 [2024-11-18 20:37:32.637220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.674 qpair failed and we were unable to recover it. 
00:36:20.674 [2024-11-18 20:37:32.647026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.674 [2024-11-18 20:37:32.647112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.674 [2024-11-18 20:37:32.647137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.674 [2024-11-18 20:37:32.647151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.674 [2024-11-18 20:37:32.647164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.674 [2024-11-18 20:37:32.647192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.674 qpair failed and we were unable to recover it. 
00:36:20.674 [2024-11-18 20:37:32.657079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.674 [2024-11-18 20:37:32.657170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.674 [2024-11-18 20:37:32.657194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.674 [2024-11-18 20:37:32.657209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.674 [2024-11-18 20:37:32.657221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.674 [2024-11-18 20:37:32.657256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.674 qpair failed and we were unable to recover it. 
00:36:20.674 [2024-11-18 20:37:32.667204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.674 [2024-11-18 20:37:32.667298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.674 [2024-11-18 20:37:32.667324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.674 [2024-11-18 20:37:32.667340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.674 [2024-11-18 20:37:32.667352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.674 [2024-11-18 20:37:32.667381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.674 qpair failed and we were unable to recover it. 
00:36:20.674 [2024-11-18 20:37:32.677134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.674 [2024-11-18 20:37:32.677266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.674 [2024-11-18 20:37:32.677291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.674 [2024-11-18 20:37:32.677306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.674 [2024-11-18 20:37:32.677319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.674 [2024-11-18 20:37:32.677347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.675 qpair failed and we were unable to recover it. 
00:36:20.937 [2024-11-18 20:37:32.687146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.937 [2024-11-18 20:37:32.687281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.937 [2024-11-18 20:37:32.687307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.937 [2024-11-18 20:37:32.687322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.937 [2024-11-18 20:37:32.687334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.937 [2024-11-18 20:37:32.687363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.937 qpair failed and we were unable to recover it. 
00:36:20.937 [2024-11-18 20:37:32.697266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.937 [2024-11-18 20:37:32.697347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.937 [2024-11-18 20:37:32.697373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.937 [2024-11-18 20:37:32.697388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.937 [2024-11-18 20:37:32.697400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.937 [2024-11-18 20:37:32.697443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.937 qpair failed and we were unable to recover it. 
00:36:20.937 [2024-11-18 20:37:32.707280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.937 [2024-11-18 20:37:32.707417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.937 [2024-11-18 20:37:32.707442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.937 [2024-11-18 20:37:32.707457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.937 [2024-11-18 20:37:32.707485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.937 [2024-11-18 20:37:32.707514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.937 qpair failed and we were unable to recover it. 
00:36:20.937 [2024-11-18 20:37:32.717319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.937 [2024-11-18 20:37:32.717456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.937 [2024-11-18 20:37:32.717480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.937 [2024-11-18 20:37:32.717495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.937 [2024-11-18 20:37:32.717507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.937 [2024-11-18 20:37:32.717536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.937 qpair failed and we were unable to recover it. 
00:36:20.937 [2024-11-18 20:37:32.727360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.937 [2024-11-18 20:37:32.727511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.937 [2024-11-18 20:37:32.727535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.937 [2024-11-18 20:37:32.727549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.937 [2024-11-18 20:37:32.727561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.937 [2024-11-18 20:37:32.727604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.937 qpair failed and we were unable to recover it. 
00:36:20.937 [2024-11-18 20:37:32.737306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.937 [2024-11-18 20:37:32.737395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.937 [2024-11-18 20:37:32.737420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.937 [2024-11-18 20:37:32.737434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.937 [2024-11-18 20:37:32.737447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.937 [2024-11-18 20:37:32.737475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.937 qpair failed and we were unable to recover it. 
00:36:20.937 [2024-11-18 20:37:32.747364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.937 [2024-11-18 20:37:32.747456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.937 [2024-11-18 20:37:32.747480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.937 [2024-11-18 20:37:32.747500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.937 [2024-11-18 20:37:32.747514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.937 [2024-11-18 20:37:32.747542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.937 qpair failed and we were unable to recover it. 
00:36:20.937 [2024-11-18 20:37:32.757351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.937 [2024-11-18 20:37:32.757435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.937 [2024-11-18 20:37:32.757460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.937 [2024-11-18 20:37:32.757474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.937 [2024-11-18 20:37:32.757487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.937 [2024-11-18 20:37:32.757515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.937 qpair failed and we were unable to recover it. 
00:36:20.937 [2024-11-18 20:37:32.767407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.937 [2024-11-18 20:37:32.767496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.937 [2024-11-18 20:37:32.767520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.937 [2024-11-18 20:37:32.767535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.937 [2024-11-18 20:37:32.767548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.937 [2024-11-18 20:37:32.767576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.937 qpair failed and we were unable to recover it. 
00:36:20.937 [2024-11-18 20:37:32.777404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.937 [2024-11-18 20:37:32.777485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.937 [2024-11-18 20:37:32.777509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.937 [2024-11-18 20:37:32.777524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.937 [2024-11-18 20:37:32.777536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.937 [2024-11-18 20:37:32.777564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.937 qpair failed and we were unable to recover it. 
00:36:20.937 [2024-11-18 20:37:32.787463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.937 [2024-11-18 20:37:32.787565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.937 [2024-11-18 20:37:32.787590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.937 [2024-11-18 20:37:32.787604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.937 [2024-11-18 20:37:32.787617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.937 [2024-11-18 20:37:32.787660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.937 qpair failed and we were unable to recover it. 
00:36:20.937 [2024-11-18 20:37:32.797497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.937 [2024-11-18 20:37:32.797617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.937 [2024-11-18 20:37:32.797647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.937 [2024-11-18 20:37:32.797664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.937 [2024-11-18 20:37:32.797677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.937 [2024-11-18 20:37:32.797705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.937 qpair failed and we were unable to recover it. 
00:36:20.937 [2024-11-18 20:37:32.807571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.938 [2024-11-18 20:37:32.807673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.938 [2024-11-18 20:37:32.807697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.938 [2024-11-18 20:37:32.807711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.938 [2024-11-18 20:37:32.807724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.938 [2024-11-18 20:37:32.807753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.938 qpair failed and we were unable to recover it. 
00:36:20.938 [2024-11-18 20:37:32.817624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.938 [2024-11-18 20:37:32.817723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.938 [2024-11-18 20:37:32.817748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.938 [2024-11-18 20:37:32.817763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.938 [2024-11-18 20:37:32.817775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.938 [2024-11-18 20:37:32.817804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.938 qpair failed and we were unable to recover it. 
00:36:20.938 [2024-11-18 20:37:32.827630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.938 [2024-11-18 20:37:32.827734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.938 [2024-11-18 20:37:32.827760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.938 [2024-11-18 20:37:32.827774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.938 [2024-11-18 20:37:32.827787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.938 [2024-11-18 20:37:32.827815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.938 qpair failed and we were unable to recover it. 
00:36:20.938 [2024-11-18 20:37:32.837649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.938 [2024-11-18 20:37:32.837743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.938 [2024-11-18 20:37:32.837767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.938 [2024-11-18 20:37:32.837782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.938 [2024-11-18 20:37:32.837794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.938 [2024-11-18 20:37:32.837822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.938 qpair failed and we were unable to recover it. 
00:36:20.938 [2024-11-18 20:37:32.847719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.938 [2024-11-18 20:37:32.847847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.938 [2024-11-18 20:37:32.847871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.938 [2024-11-18 20:37:32.847886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.938 [2024-11-18 20:37:32.847898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.938 [2024-11-18 20:37:32.847926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.938 qpair failed and we were unable to recover it. 
00:36:20.938 [2024-11-18 20:37:32.857679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.938 [2024-11-18 20:37:32.857781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.938 [2024-11-18 20:37:32.857805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.938 [2024-11-18 20:37:32.857820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.938 [2024-11-18 20:37:32.857832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.938 [2024-11-18 20:37:32.857861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.938 qpair failed and we were unable to recover it. 
00:36:20.938 [2024-11-18 20:37:32.867680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.938 [2024-11-18 20:37:32.867770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.938 [2024-11-18 20:37:32.867794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.938 [2024-11-18 20:37:32.867809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.938 [2024-11-18 20:37:32.867822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.938 [2024-11-18 20:37:32.867850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.938 qpair failed and we were unable to recover it. 
00:36:20.938 [2024-11-18 20:37:32.877727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.938 [2024-11-18 20:37:32.877814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.938 [2024-11-18 20:37:32.877838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.938 [2024-11-18 20:37:32.877859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.938 [2024-11-18 20:37:32.877872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.938 [2024-11-18 20:37:32.877901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.938 qpair failed and we were unable to recover it. 
00:36:20.938 [2024-11-18 20:37:32.887782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.938 [2024-11-18 20:37:32.887873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.938 [2024-11-18 20:37:32.887898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.938 [2024-11-18 20:37:32.887912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.938 [2024-11-18 20:37:32.887924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.938 [2024-11-18 20:37:32.887953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.938 qpair failed and we were unable to recover it. 
00:36:20.938 [2024-11-18 20:37:32.897798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.938 [2024-11-18 20:37:32.897880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.938 [2024-11-18 20:37:32.897905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.938 [2024-11-18 20:37:32.897919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.938 [2024-11-18 20:37:32.897932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.938 [2024-11-18 20:37:32.897960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.938 qpair failed and we were unable to recover it. 
00:36:20.938 [2024-11-18 20:37:32.907928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.938 [2024-11-18 20:37:32.908054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.938 [2024-11-18 20:37:32.908092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.938 [2024-11-18 20:37:32.908106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.938 [2024-11-18 20:37:32.908119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.938 [2024-11-18 20:37:32.908146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.938 qpair failed and we were unable to recover it. 
00:36:20.938 [2024-11-18 20:37:32.917858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.938 [2024-11-18 20:37:32.917969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.938 [2024-11-18 20:37:32.917996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.938 [2024-11-18 20:37:32.918011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.938 [2024-11-18 20:37:32.918023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.938 [2024-11-18 20:37:32.918058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.938 qpair failed and we were unable to recover it. 
00:36:20.938 [2024-11-18 20:37:32.927861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.938 [2024-11-18 20:37:32.927946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.938 [2024-11-18 20:37:32.927971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.938 [2024-11-18 20:37:32.927986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.938 [2024-11-18 20:37:32.927999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.938 [2024-11-18 20:37:32.928028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.938 qpair failed and we were unable to recover it. 
00:36:20.938 [2024-11-18 20:37:32.937906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.939 [2024-11-18 20:37:32.937991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.939 [2024-11-18 20:37:32.938016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.939 [2024-11-18 20:37:32.938031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.939 [2024-11-18 20:37:32.938044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:20.939 [2024-11-18 20:37:32.938073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:20.939 qpair failed and we were unable to recover it. 
00:36:21.201 [2024-11-18 20:37:32.947952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.201 [2024-11-18 20:37:32.948043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.201 [2024-11-18 20:37:32.948069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.201 [2024-11-18 20:37:32.948084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.201 [2024-11-18 20:37:32.948097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.201 [2024-11-18 20:37:32.948126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.201 qpair failed and we were unable to recover it. 
00:36:21.201 [2024-11-18 20:37:32.957981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.201 [2024-11-18 20:37:32.958073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.201 [2024-11-18 20:37:32.958099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.201 [2024-11-18 20:37:32.958113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.201 [2024-11-18 20:37:32.958126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.201 [2024-11-18 20:37:32.958154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.201 qpair failed and we were unable to recover it. 
00:36:21.201 [2024-11-18 20:37:32.968006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.201 [2024-11-18 20:37:32.968096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.201 [2024-11-18 20:37:32.968120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.201 [2024-11-18 20:37:32.968135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.201 [2024-11-18 20:37:32.968147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.201 [2024-11-18 20:37:32.968176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.201 qpair failed and we were unable to recover it. 
00:36:21.201 [2024-11-18 20:37:32.978011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.201 [2024-11-18 20:37:32.978107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.201 [2024-11-18 20:37:32.978132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.201 [2024-11-18 20:37:32.978146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.201 [2024-11-18 20:37:32.978159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.201 [2024-11-18 20:37:32.978188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.201 qpair failed and we were unable to recover it. 
00:36:21.201 [2024-11-18 20:37:32.988023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.201 [2024-11-18 20:37:32.988113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.201 [2024-11-18 20:37:32.988138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.201 [2024-11-18 20:37:32.988152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.201 [2024-11-18 20:37:32.988165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.201 [2024-11-18 20:37:32.988194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.201 qpair failed and we were unable to recover it. 
00:36:21.201 [2024-11-18 20:37:32.998043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.201 [2024-11-18 20:37:32.998174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.201 [2024-11-18 20:37:32.998198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.201 [2024-11-18 20:37:32.998213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.201 [2024-11-18 20:37:32.998226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.201 [2024-11-18 20:37:32.998255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.201 qpair failed and we were unable to recover it. 
00:36:21.201 [2024-11-18 20:37:33.008072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.201 [2024-11-18 20:37:33.008156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.201 [2024-11-18 20:37:33.008181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.201 [2024-11-18 20:37:33.008200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.201 [2024-11-18 20:37:33.008213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.201 [2024-11-18 20:37:33.008243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.201 qpair failed and we were unable to recover it. 
00:36:21.201 [2024-11-18 20:37:33.018130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.201 [2024-11-18 20:37:33.018213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.201 [2024-11-18 20:37:33.018237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.201 [2024-11-18 20:37:33.018251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.201 [2024-11-18 20:37:33.018264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.201 [2024-11-18 20:37:33.018293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.201 qpair failed and we were unable to recover it. 
00:36:21.201 [2024-11-18 20:37:33.028189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.201 [2024-11-18 20:37:33.028299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.201 [2024-11-18 20:37:33.028325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.201 [2024-11-18 20:37:33.028339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.201 [2024-11-18 20:37:33.028352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.201 [2024-11-18 20:37:33.028380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.201 qpair failed and we were unable to recover it. 
00:36:21.201 [2024-11-18 20:37:33.038214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.201 [2024-11-18 20:37:33.038330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.201 [2024-11-18 20:37:33.038355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.201 [2024-11-18 20:37:33.038369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.201 [2024-11-18 20:37:33.038382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.201 [2024-11-18 20:37:33.038411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.201 qpair failed and we were unable to recover it. 
00:36:21.201 [2024-11-18 20:37:33.048325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.201 [2024-11-18 20:37:33.048416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.202 [2024-11-18 20:37:33.048445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.202 [2024-11-18 20:37:33.048461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.202 [2024-11-18 20:37:33.048473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.202 [2024-11-18 20:37:33.048509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.202 qpair failed and we were unable to recover it. 
00:36:21.202 [2024-11-18 20:37:33.058203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.202 [2024-11-18 20:37:33.058285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.202 [2024-11-18 20:37:33.058310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.202 [2024-11-18 20:37:33.058324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.202 [2024-11-18 20:37:33.058336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.202 [2024-11-18 20:37:33.058365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.202 qpair failed and we were unable to recover it. 
00:36:21.202 [2024-11-18 20:37:33.068265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.202 [2024-11-18 20:37:33.068399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.202 [2024-11-18 20:37:33.068426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.202 [2024-11-18 20:37:33.068441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.202 [2024-11-18 20:37:33.068454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.202 [2024-11-18 20:37:33.068482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.202 qpair failed and we were unable to recover it. 
00:36:21.202 [2024-11-18 20:37:33.078280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.202 [2024-11-18 20:37:33.078365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.202 [2024-11-18 20:37:33.078389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.202 [2024-11-18 20:37:33.078402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.202 [2024-11-18 20:37:33.078421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.202 [2024-11-18 20:37:33.078450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.202 qpair failed and we were unable to recover it. 
00:36:21.202 [2024-11-18 20:37:33.088301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.202 [2024-11-18 20:37:33.088386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.202 [2024-11-18 20:37:33.088410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.202 [2024-11-18 20:37:33.088424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.202 [2024-11-18 20:37:33.088436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.202 [2024-11-18 20:37:33.088465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.202 qpair failed and we were unable to recover it. 
00:36:21.202 [2024-11-18 20:37:33.098315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.202 [2024-11-18 20:37:33.098398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.202 [2024-11-18 20:37:33.098423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.202 [2024-11-18 20:37:33.098438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.202 [2024-11-18 20:37:33.098451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.202 [2024-11-18 20:37:33.098480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.202 qpair failed and we were unable to recover it. 
00:36:21.202 [2024-11-18 20:37:33.108367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.202 [2024-11-18 20:37:33.108457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.202 [2024-11-18 20:37:33.108481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.202 [2024-11-18 20:37:33.108496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.202 [2024-11-18 20:37:33.108509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.202 [2024-11-18 20:37:33.108537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.202 qpair failed and we were unable to recover it. 
00:36:21.202 [2024-11-18 20:37:33.118509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.202 [2024-11-18 20:37:33.118610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.202 [2024-11-18 20:37:33.118634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.202 [2024-11-18 20:37:33.118661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.202 [2024-11-18 20:37:33.118674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.202 [2024-11-18 20:37:33.118703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.202 qpair failed and we were unable to recover it. 
00:36:21.202 [2024-11-18 20:37:33.128465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.202 [2024-11-18 20:37:33.128547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.202 [2024-11-18 20:37:33.128571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.202 [2024-11-18 20:37:33.128586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.202 [2024-11-18 20:37:33.128598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.202 [2024-11-18 20:37:33.128626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.202 qpair failed and we were unable to recover it. 
00:36:21.202 [2024-11-18 20:37:33.138523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.202 [2024-11-18 20:37:33.138651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.202 [2024-11-18 20:37:33.138680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.202 [2024-11-18 20:37:33.138695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.202 [2024-11-18 20:37:33.138708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.202 [2024-11-18 20:37:33.138737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.202 qpair failed and we were unable to recover it. 
00:36:21.202 [2024-11-18 20:37:33.148650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.202 [2024-11-18 20:37:33.148741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.202 [2024-11-18 20:37:33.148765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.202 [2024-11-18 20:37:33.148779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.202 [2024-11-18 20:37:33.148792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.202 [2024-11-18 20:37:33.148820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.202 qpair failed and we were unable to recover it. 
00:36:21.202 [2024-11-18 20:37:33.158597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.202 [2024-11-18 20:37:33.158691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.202 [2024-11-18 20:37:33.158726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.202 [2024-11-18 20:37:33.158740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.202 [2024-11-18 20:37:33.158752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.202 [2024-11-18 20:37:33.158781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.202 qpair failed and we were unable to recover it. 
00:36:21.202 [2024-11-18 20:37:33.168548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.202 [2024-11-18 20:37:33.168653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.202 [2024-11-18 20:37:33.168690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.202 [2024-11-18 20:37:33.168704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.202 [2024-11-18 20:37:33.168717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.202 [2024-11-18 20:37:33.168746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.202 qpair failed and we were unable to recover it. 
00:36:21.202 [2024-11-18 20:37:33.178556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.203 [2024-11-18 20:37:33.178647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.203 [2024-11-18 20:37:33.178674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.203 [2024-11-18 20:37:33.178700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.203 [2024-11-18 20:37:33.178713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.203 [2024-11-18 20:37:33.178746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.203 qpair failed and we were unable to recover it. 
00:36:21.203 [2024-11-18 20:37:33.188591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.203 [2024-11-18 20:37:33.188694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.203 [2024-11-18 20:37:33.188719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.203 [2024-11-18 20:37:33.188733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.203 [2024-11-18 20:37:33.188746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.203 [2024-11-18 20:37:33.188774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.203 qpair failed and we were unable to recover it. 
00:36:21.203 [2024-11-18 20:37:33.198662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.203 [2024-11-18 20:37:33.198753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.203 [2024-11-18 20:37:33.198777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.203 [2024-11-18 20:37:33.198791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.203 [2024-11-18 20:37:33.198803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.203 [2024-11-18 20:37:33.198833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.203 qpair failed and we were unable to recover it. 
00:36:21.464 [2024-11-18 20:37:33.208687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.464 [2024-11-18 20:37:33.208813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.464 [2024-11-18 20:37:33.208840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.465 [2024-11-18 20:37:33.208856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.465 [2024-11-18 20:37:33.208869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.465 [2024-11-18 20:37:33.208898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.465 qpair failed and we were unable to recover it. 
00:36:21.465 [2024-11-18 20:37:33.218663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.465 [2024-11-18 20:37:33.218775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.465 [2024-11-18 20:37:33.218800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.465 [2024-11-18 20:37:33.218815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.465 [2024-11-18 20:37:33.218828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.465 [2024-11-18 20:37:33.218857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.465 qpair failed and we were unable to recover it. 
00:36:21.465 [2024-11-18 20:37:33.228777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.465 [2024-11-18 20:37:33.228910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.465 [2024-11-18 20:37:33.228937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.465 [2024-11-18 20:37:33.228952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.465 [2024-11-18 20:37:33.228964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.465 [2024-11-18 20:37:33.229008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.465 qpair failed and we were unable to recover it. 
00:36:21.465 [2024-11-18 20:37:33.238722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.465 [2024-11-18 20:37:33.238806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.465 [2024-11-18 20:37:33.238831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.465 [2024-11-18 20:37:33.238846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.465 [2024-11-18 20:37:33.238859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.465 [2024-11-18 20:37:33.238887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.465 qpair failed and we were unable to recover it. 
00:36:21.465 [2024-11-18 20:37:33.248786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.465 [2024-11-18 20:37:33.248872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.465 [2024-11-18 20:37:33.248897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.465 [2024-11-18 20:37:33.248912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.465 [2024-11-18 20:37:33.248924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.465 [2024-11-18 20:37:33.248952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.465 qpair failed and we were unable to recover it. 
00:36:21.465 [2024-11-18 20:37:33.258810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.465 [2024-11-18 20:37:33.258894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.465 [2024-11-18 20:37:33.258918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.465 [2024-11-18 20:37:33.258932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.465 [2024-11-18 20:37:33.258945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.465 [2024-11-18 20:37:33.258973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.465 qpair failed and we were unable to recover it. 
00:36:21.465 [2024-11-18 20:37:33.268844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.465 [2024-11-18 20:37:33.268939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.465 [2024-11-18 20:37:33.268972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.465 [2024-11-18 20:37:33.268987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.465 [2024-11-18 20:37:33.268999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.465 [2024-11-18 20:37:33.269028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.465 qpair failed and we were unable to recover it. 
00:36:21.465 [2024-11-18 20:37:33.278844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.465 [2024-11-18 20:37:33.278936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.465 [2024-11-18 20:37:33.278960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.465 [2024-11-18 20:37:33.278975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.465 [2024-11-18 20:37:33.278988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.465 [2024-11-18 20:37:33.279017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.465 qpair failed and we were unable to recover it. 
00:36:21.465 [2024-11-18 20:37:33.288942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.465 [2024-11-18 20:37:33.289030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.465 [2024-11-18 20:37:33.289054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.465 [2024-11-18 20:37:33.289068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.465 [2024-11-18 20:37:33.289081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.465 [2024-11-18 20:37:33.289109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.465 qpair failed and we were unable to recover it. 
00:36:21.465 [2024-11-18 20:37:33.298932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.465 [2024-11-18 20:37:33.299057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.465 [2024-11-18 20:37:33.299083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.465 [2024-11-18 20:37:33.299098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.465 [2024-11-18 20:37:33.299111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.465 [2024-11-18 20:37:33.299140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.465 qpair failed and we were unable to recover it. 
00:36:21.465 [2024-11-18 20:37:33.308973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.465 [2024-11-18 20:37:33.309097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.465 [2024-11-18 20:37:33.309123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.465 [2024-11-18 20:37:33.309138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.465 [2024-11-18 20:37:33.309150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.465 [2024-11-18 20:37:33.309183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.465 qpair failed and we were unable to recover it. 
00:36:21.465 [2024-11-18 20:37:33.318973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.465 [2024-11-18 20:37:33.319062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.465 [2024-11-18 20:37:33.319087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.465 [2024-11-18 20:37:33.319101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.465 [2024-11-18 20:37:33.319113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.465 [2024-11-18 20:37:33.319142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.465 qpair failed and we were unable to recover it. 
00:36:21.465 [2024-11-18 20:37:33.329032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.465 [2024-11-18 20:37:33.329117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.465 [2024-11-18 20:37:33.329142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.465 [2024-11-18 20:37:33.329157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.465 [2024-11-18 20:37:33.329169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.465 [2024-11-18 20:37:33.329198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.465 qpair failed and we were unable to recover it. 
00:36:21.465 [2024-11-18 20:37:33.339039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.465 [2024-11-18 20:37:33.339152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.466 [2024-11-18 20:37:33.339177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.466 [2024-11-18 20:37:33.339191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.466 [2024-11-18 20:37:33.339204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.466 [2024-11-18 20:37:33.339233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.466 qpair failed and we were unable to recover it. 
00:36:21.466 [2024-11-18 20:37:33.349109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.466 [2024-11-18 20:37:33.349214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.466 [2024-11-18 20:37:33.349238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.466 [2024-11-18 20:37:33.349252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.466 [2024-11-18 20:37:33.349266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.466 [2024-11-18 20:37:33.349295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.466 qpair failed and we were unable to recover it. 
00:36:21.466 [2024-11-18 20:37:33.359178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.466 [2024-11-18 20:37:33.359266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.466 [2024-11-18 20:37:33.359290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.466 [2024-11-18 20:37:33.359305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.466 [2024-11-18 20:37:33.359332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.466 [2024-11-18 20:37:33.359361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.466 qpair failed and we were unable to recover it. 
00:36:21.466 [2024-11-18 20:37:33.369191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.466 [2024-11-18 20:37:33.369282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.466 [2024-11-18 20:37:33.369308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.466 [2024-11-18 20:37:33.369322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.466 [2024-11-18 20:37:33.369335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.466 [2024-11-18 20:37:33.369364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.466 qpair failed and we were unable to recover it. 
00:36:21.466 [2024-11-18 20:37:33.379135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.466 [2024-11-18 20:37:33.379233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.466 [2024-11-18 20:37:33.379258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.466 [2024-11-18 20:37:33.379272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.466 [2024-11-18 20:37:33.379284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.466 [2024-11-18 20:37:33.379314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.466 qpair failed and we were unable to recover it. 
00:36:21.466 [2024-11-18 20:37:33.389213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.466 [2024-11-18 20:37:33.389340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.466 [2024-11-18 20:37:33.389364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.466 [2024-11-18 20:37:33.389379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.466 [2024-11-18 20:37:33.389392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.466 [2024-11-18 20:37:33.389420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.466 qpair failed and we were unable to recover it. 
00:36:21.466 [2024-11-18 20:37:33.399195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.466 [2024-11-18 20:37:33.399287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.466 [2024-11-18 20:37:33.399317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.466 [2024-11-18 20:37:33.399332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.466 [2024-11-18 20:37:33.399345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.466 [2024-11-18 20:37:33.399373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.466 qpair failed and we were unable to recover it. 
00:36:21.466 [2024-11-18 20:37:33.409276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.466 [2024-11-18 20:37:33.409380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.466 [2024-11-18 20:37:33.409404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.466 [2024-11-18 20:37:33.409419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.466 [2024-11-18 20:37:33.409431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.466 [2024-11-18 20:37:33.409460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.466 qpair failed and we were unable to recover it. 
00:36:21.466 [2024-11-18 20:37:33.419246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.466 [2024-11-18 20:37:33.419379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.466 [2024-11-18 20:37:33.419404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.466 [2024-11-18 20:37:33.419418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.466 [2024-11-18 20:37:33.419431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.466 [2024-11-18 20:37:33.419460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.466 qpair failed and we were unable to recover it. 
00:36:21.466 [2024-11-18 20:37:33.429314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.466 [2024-11-18 20:37:33.429403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.466 [2024-11-18 20:37:33.429428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.466 [2024-11-18 20:37:33.429443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.466 [2024-11-18 20:37:33.429456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.466 [2024-11-18 20:37:33.429484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.466 qpair failed and we were unable to recover it. 
00:36:21.466 [2024-11-18 20:37:33.439379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.466 [2024-11-18 20:37:33.439464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.466 [2024-11-18 20:37:33.439489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.466 [2024-11-18 20:37:33.439503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.466 [2024-11-18 20:37:33.439516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.466 [2024-11-18 20:37:33.439550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.466 qpair failed and we were unable to recover it. 
00:36:21.466 [2024-11-18 20:37:33.449354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.466 [2024-11-18 20:37:33.449459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.466 [2024-11-18 20:37:33.449485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.466 [2024-11-18 20:37:33.449499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.466 [2024-11-18 20:37:33.449511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.466 [2024-11-18 20:37:33.449540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.466 qpair failed and we were unable to recover it. 
00:36:21.466 [2024-11-18 20:37:33.459367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.466 [2024-11-18 20:37:33.459453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.466 [2024-11-18 20:37:33.459478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.466 [2024-11-18 20:37:33.459493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.466 [2024-11-18 20:37:33.459506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.466 [2024-11-18 20:37:33.459534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.466 qpair failed and we were unable to recover it. 
00:36:21.466 [2024-11-18 20:37:33.469433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.466 [2024-11-18 20:37:33.469527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.467 [2024-11-18 20:37:33.469552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.467 [2024-11-18 20:37:33.469567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.467 [2024-11-18 20:37:33.469579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.467 [2024-11-18 20:37:33.469608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.467 qpair failed and we were unable to recover it. 
00:36:21.727 [2024-11-18 20:37:33.479419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.727 [2024-11-18 20:37:33.479509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.727 [2024-11-18 20:37:33.479534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.727 [2024-11-18 20:37:33.479549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.727 [2024-11-18 20:37:33.479561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.727 [2024-11-18 20:37:33.479590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.727 qpair failed and we were unable to recover it. 
00:36:21.727 [2024-11-18 20:37:33.489515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.727 [2024-11-18 20:37:33.489614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.727 [2024-11-18 20:37:33.489646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.727 [2024-11-18 20:37:33.489663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.727 [2024-11-18 20:37:33.489676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.727 [2024-11-18 20:37:33.489705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.727 qpair failed and we were unable to recover it. 
00:36:21.727 [2024-11-18 20:37:33.499512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.727 [2024-11-18 20:37:33.499632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.727 [2024-11-18 20:37:33.499667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.727 [2024-11-18 20:37:33.499681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.727 [2024-11-18 20:37:33.499694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.727 [2024-11-18 20:37:33.499723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.727 qpair failed and we were unable to recover it. 
00:36:21.727 [2024-11-18 20:37:33.509610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.727 [2024-11-18 20:37:33.509712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.727 [2024-11-18 20:37:33.509737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.727 [2024-11-18 20:37:33.509751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.727 [2024-11-18 20:37:33.509764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.727 [2024-11-18 20:37:33.509792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.727 qpair failed and we were unable to recover it.
00:36:21.727 [2024-11-18 20:37:33.519568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.727 [2024-11-18 20:37:33.519655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.727 [2024-11-18 20:37:33.519681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.727 [2024-11-18 20:37:33.519696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.727 [2024-11-18 20:37:33.519708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.727 [2024-11-18 20:37:33.519737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.727 qpair failed and we were unable to recover it.
00:36:21.727 [2024-11-18 20:37:33.529586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.727 [2024-11-18 20:37:33.529678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.727 [2024-11-18 20:37:33.529708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.727 [2024-11-18 20:37:33.529723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.727 [2024-11-18 20:37:33.529736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.727 [2024-11-18 20:37:33.529765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.727 qpair failed and we were unable to recover it.
00:36:21.727 [2024-11-18 20:37:33.539589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.727 [2024-11-18 20:37:33.539682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.727 [2024-11-18 20:37:33.539707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.727 [2024-11-18 20:37:33.539721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.727 [2024-11-18 20:37:33.539734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.727 [2024-11-18 20:37:33.539763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.727 qpair failed and we were unable to recover it.
00:36:21.727 [2024-11-18 20:37:33.549613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.728 [2024-11-18 20:37:33.549713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.728 [2024-11-18 20:37:33.549738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.728 [2024-11-18 20:37:33.549752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.728 [2024-11-18 20:37:33.549765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.728 [2024-11-18 20:37:33.549793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.728 qpair failed and we were unable to recover it.
00:36:21.728 [2024-11-18 20:37:33.559768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.728 [2024-11-18 20:37:33.559909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.728 [2024-11-18 20:37:33.559935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.728 [2024-11-18 20:37:33.559949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.728 [2024-11-18 20:37:33.559962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.728 [2024-11-18 20:37:33.559991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.728 qpair failed and we were unable to recover it.
00:36:21.728 [2024-11-18 20:37:33.569705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.728 [2024-11-18 20:37:33.569789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.728 [2024-11-18 20:37:33.569812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.728 [2024-11-18 20:37:33.569826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.728 [2024-11-18 20:37:33.569844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.728 [2024-11-18 20:37:33.569873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.728 qpair failed and we were unable to recover it.
00:36:21.728 [2024-11-18 20:37:33.579707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.728 [2024-11-18 20:37:33.579792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.728 [2024-11-18 20:37:33.579817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.728 [2024-11-18 20:37:33.579831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.728 [2024-11-18 20:37:33.579843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.728 [2024-11-18 20:37:33.579871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.728 qpair failed and we were unable to recover it.
00:36:21.728 [2024-11-18 20:37:33.589807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.728 [2024-11-18 20:37:33.589946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.728 [2024-11-18 20:37:33.589972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.728 [2024-11-18 20:37:33.589987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.728 [2024-11-18 20:37:33.589999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.728 [2024-11-18 20:37:33.590028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.728 qpair failed and we were unable to recover it.
00:36:21.728 [2024-11-18 20:37:33.599796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.728 [2024-11-18 20:37:33.599882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.728 [2024-11-18 20:37:33.599906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.728 [2024-11-18 20:37:33.599920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.728 [2024-11-18 20:37:33.599932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.728 [2024-11-18 20:37:33.599960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.728 qpair failed and we were unable to recover it.
00:36:21.728 [2024-11-18 20:37:33.609837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.728 [2024-11-18 20:37:33.609927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.728 [2024-11-18 20:37:33.609951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.728 [2024-11-18 20:37:33.609966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.728 [2024-11-18 20:37:33.609978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.728 [2024-11-18 20:37:33.610006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.728 qpair failed and we were unable to recover it.
00:36:21.728 [2024-11-18 20:37:33.619830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.728 [2024-11-18 20:37:33.619928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.728 [2024-11-18 20:37:33.619951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.728 [2024-11-18 20:37:33.619966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.728 [2024-11-18 20:37:33.619978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.728 [2024-11-18 20:37:33.620007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.728 qpair failed and we were unable to recover it.
00:36:21.728 [2024-11-18 20:37:33.629911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.728 [2024-11-18 20:37:33.630005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.728 [2024-11-18 20:37:33.630029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.728 [2024-11-18 20:37:33.630043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.728 [2024-11-18 20:37:33.630056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.728 [2024-11-18 20:37:33.630084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.728 qpair failed and we were unable to recover it.
00:36:21.728 [2024-11-18 20:37:33.639863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.728 [2024-11-18 20:37:33.639956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.728 [2024-11-18 20:37:33.639980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.728 [2024-11-18 20:37:33.639993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.728 [2024-11-18 20:37:33.640006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.728 [2024-11-18 20:37:33.640035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.728 qpair failed and we were unable to recover it.
00:36:21.728 [2024-11-18 20:37:33.649925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.728 [2024-11-18 20:37:33.650004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.728 [2024-11-18 20:37:33.650028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.728 [2024-11-18 20:37:33.650042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.728 [2024-11-18 20:37:33.650054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.728 [2024-11-18 20:37:33.650083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.728 qpair failed and we were unable to recover it.
00:36:21.728 [2024-11-18 20:37:33.659940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.728 [2024-11-18 20:37:33.660021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.728 [2024-11-18 20:37:33.660049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.728 [2024-11-18 20:37:33.660064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.728 [2024-11-18 20:37:33.660077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.728 [2024-11-18 20:37:33.660105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.728 qpair failed and we were unable to recover it.
00:36:21.728 [2024-11-18 20:37:33.670003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.728 [2024-11-18 20:37:33.670113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.728 [2024-11-18 20:37:33.670140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.728 [2024-11-18 20:37:33.670155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.728 [2024-11-18 20:37:33.670168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.728 [2024-11-18 20:37:33.670197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.728 qpair failed and we were unable to recover it.
00:36:21.728 [2024-11-18 20:37:33.679987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.729 [2024-11-18 20:37:33.680076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.729 [2024-11-18 20:37:33.680100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.729 [2024-11-18 20:37:33.680114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.729 [2024-11-18 20:37:33.680127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.729 [2024-11-18 20:37:33.680156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.729 qpair failed and we were unable to recover it.
00:36:21.729 [2024-11-18 20:37:33.690041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.729 [2024-11-18 20:37:33.690125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.729 [2024-11-18 20:37:33.690149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.729 [2024-11-18 20:37:33.690163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.729 [2024-11-18 20:37:33.690176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.729 [2024-11-18 20:37:33.690204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.729 qpair failed and we were unable to recover it.
00:36:21.729 [2024-11-18 20:37:33.700095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.729 [2024-11-18 20:37:33.700182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.729 [2024-11-18 20:37:33.700207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.729 [2024-11-18 20:37:33.700221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.729 [2024-11-18 20:37:33.700238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.729 [2024-11-18 20:37:33.700267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.729 qpair failed and we were unable to recover it.
00:36:21.729 [2024-11-18 20:37:33.710110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.729 [2024-11-18 20:37:33.710216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.729 [2024-11-18 20:37:33.710240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.729 [2024-11-18 20:37:33.710255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.729 [2024-11-18 20:37:33.710267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.729 [2024-11-18 20:37:33.710294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.729 qpair failed and we were unable to recover it.
00:36:21.729 [2024-11-18 20:37:33.720130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.729 [2024-11-18 20:37:33.720213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.729 [2024-11-18 20:37:33.720237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.729 [2024-11-18 20:37:33.720251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.729 [2024-11-18 20:37:33.720263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.729 [2024-11-18 20:37:33.720292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.729 qpair failed and we were unable to recover it.
00:36:21.729 [2024-11-18 20:37:33.730158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.729 [2024-11-18 20:37:33.730243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.729 [2024-11-18 20:37:33.730268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.729 [2024-11-18 20:37:33.730282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.729 [2024-11-18 20:37:33.730296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.729 [2024-11-18 20:37:33.730334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.729 qpair failed and we were unable to recover it.
00:36:21.989 [2024-11-18 20:37:33.740175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.989 [2024-11-18 20:37:33.740305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.989 [2024-11-18 20:37:33.740333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.989 [2024-11-18 20:37:33.740347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.989 [2024-11-18 20:37:33.740360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.989 [2024-11-18 20:37:33.740389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.989 qpair failed and we were unable to recover it.
00:36:21.989 [2024-11-18 20:37:33.750230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.989 [2024-11-18 20:37:33.750327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.989 [2024-11-18 20:37:33.750351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.989 [2024-11-18 20:37:33.750365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.989 [2024-11-18 20:37:33.750378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.989 [2024-11-18 20:37:33.750407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.989 qpair failed and we were unable to recover it.
00:36:21.989 [2024-11-18 20:37:33.760256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.989 [2024-11-18 20:37:33.760349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.989 [2024-11-18 20:37:33.760373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.989 [2024-11-18 20:37:33.760388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.989 [2024-11-18 20:37:33.760401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.989 [2024-11-18 20:37:33.760429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.989 qpair failed and we were unable to recover it.
00:36:21.989 [2024-11-18 20:37:33.770229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.989 [2024-11-18 20:37:33.770327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.989 [2024-11-18 20:37:33.770351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.989 [2024-11-18 20:37:33.770366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.989 [2024-11-18 20:37:33.770378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.989 [2024-11-18 20:37:33.770407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.989 qpair failed and we were unable to recover it.
00:36:21.989 [2024-11-18 20:37:33.780352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.989 [2024-11-18 20:37:33.780432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.989 [2024-11-18 20:37:33.780457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.989 [2024-11-18 20:37:33.780471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.989 [2024-11-18 20:37:33.780484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.989 [2024-11-18 20:37:33.780512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.989 qpair failed and we were unable to recover it.
00:36:21.989 [2024-11-18 20:37:33.790360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.989 [2024-11-18 20:37:33.790461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.989 [2024-11-18 20:37:33.790490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.989 [2024-11-18 20:37:33.790505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.989 [2024-11-18 20:37:33.790518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.989 [2024-11-18 20:37:33.790546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.989 qpair failed and we were unable to recover it.
00:36:21.989 [2024-11-18 20:37:33.800334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.989 [2024-11-18 20:37:33.800418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.989 [2024-11-18 20:37:33.800442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.989 [2024-11-18 20:37:33.800457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.989 [2024-11-18 20:37:33.800470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.989 [2024-11-18 20:37:33.800498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.989 qpair failed and we were unable to recover it.
00:36:21.989 [2024-11-18 20:37:33.810394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.989 [2024-11-18 20:37:33.810516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.989 [2024-11-18 20:37:33.810542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.989 [2024-11-18 20:37:33.810556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.989 [2024-11-18 20:37:33.810568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.990 [2024-11-18 20:37:33.810597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.990 qpair failed and we were unable to recover it.
00:36:21.990 [2024-11-18 20:37:33.820413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.990 [2024-11-18 20:37:33.820496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.990 [2024-11-18 20:37:33.820520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.990 [2024-11-18 20:37:33.820533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.990 [2024-11-18 20:37:33.820546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.990 [2024-11-18 20:37:33.820575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.990 qpair failed and we were unable to recover it.
00:36:21.990 [2024-11-18 20:37:33.830436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.990 [2024-11-18 20:37:33.830531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.990 [2024-11-18 20:37:33.830555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.990 [2024-11-18 20:37:33.830570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.990 [2024-11-18 20:37:33.830588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.990 [2024-11-18 20:37:33.830617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.990 qpair failed and we were unable to recover it.
00:36:21.990 [2024-11-18 20:37:33.840456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.990 [2024-11-18 20:37:33.840544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.990 [2024-11-18 20:37:33.840569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.990 [2024-11-18 20:37:33.840582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.990 [2024-11-18 20:37:33.840595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.990 [2024-11-18 20:37:33.840623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.990 qpair failed and we were unable to recover it.
00:36:21.990 [2024-11-18 20:37:33.850481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:21.990 [2024-11-18 20:37:33.850614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:21.990 [2024-11-18 20:37:33.850648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:21.990 [2024-11-18 20:37:33.850666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:21.990 [2024-11-18 20:37:33.850678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:21.990 [2024-11-18 20:37:33.850706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:21.990 qpair failed and we were unable to recover it.
00:36:21.990 [2024-11-18 20:37:33.860528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.990 [2024-11-18 20:37:33.860652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.990 [2024-11-18 20:37:33.860678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.990 [2024-11-18 20:37:33.860693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.990 [2024-11-18 20:37:33.860705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.990 [2024-11-18 20:37:33.860734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.990 qpair failed and we were unable to recover it. 
00:36:21.990 [2024-11-18 20:37:33.870552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.990 [2024-11-18 20:37:33.870663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.990 [2024-11-18 20:37:33.870688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.990 [2024-11-18 20:37:33.870702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.990 [2024-11-18 20:37:33.870714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.990 [2024-11-18 20:37:33.870743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.990 qpair failed and we were unable to recover it. 
00:36:21.990 [2024-11-18 20:37:33.880595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.990 [2024-11-18 20:37:33.880700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.990 [2024-11-18 20:37:33.880725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.990 [2024-11-18 20:37:33.880740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.990 [2024-11-18 20:37:33.880752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.990 [2024-11-18 20:37:33.880780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.990 qpair failed and we were unable to recover it. 
00:36:21.990 [2024-11-18 20:37:33.890569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.990 [2024-11-18 20:37:33.890663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.990 [2024-11-18 20:37:33.890687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.990 [2024-11-18 20:37:33.890701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.990 [2024-11-18 20:37:33.890714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.990 [2024-11-18 20:37:33.890742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.990 qpair failed and we were unable to recover it. 
00:36:21.990 [2024-11-18 20:37:33.900631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.990 [2024-11-18 20:37:33.900735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.990 [2024-11-18 20:37:33.900759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.990 [2024-11-18 20:37:33.900773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.990 [2024-11-18 20:37:33.900786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.990 [2024-11-18 20:37:33.900814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.990 qpair failed and we were unable to recover it. 
00:36:21.990 [2024-11-18 20:37:33.910681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.990 [2024-11-18 20:37:33.910779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.990 [2024-11-18 20:37:33.910803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.990 [2024-11-18 20:37:33.910817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.990 [2024-11-18 20:37:33.910830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.990 [2024-11-18 20:37:33.910859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.990 qpair failed and we were unable to recover it. 
00:36:21.990 [2024-11-18 20:37:33.920675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.990 [2024-11-18 20:37:33.920759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.990 [2024-11-18 20:37:33.920792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.990 [2024-11-18 20:37:33.920807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.990 [2024-11-18 20:37:33.920820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.990 [2024-11-18 20:37:33.920849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.990 qpair failed and we were unable to recover it. 
00:36:21.990 [2024-11-18 20:37:33.930712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.990 [2024-11-18 20:37:33.930798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.990 [2024-11-18 20:37:33.930822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.990 [2024-11-18 20:37:33.930836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.990 [2024-11-18 20:37:33.930849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.990 [2024-11-18 20:37:33.930878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.990 qpair failed and we were unable to recover it. 
00:36:21.990 [2024-11-18 20:37:33.940801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.990 [2024-11-18 20:37:33.940885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.990 [2024-11-18 20:37:33.940908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.990 [2024-11-18 20:37:33.940922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.991 [2024-11-18 20:37:33.940935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.991 [2024-11-18 20:37:33.940964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.991 qpair failed and we were unable to recover it. 
00:36:21.991 [2024-11-18 20:37:33.950761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.991 [2024-11-18 20:37:33.950852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.991 [2024-11-18 20:37:33.950876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.991 [2024-11-18 20:37:33.950891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.991 [2024-11-18 20:37:33.950903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.991 [2024-11-18 20:37:33.950931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.991 qpair failed and we were unable to recover it. 
00:36:21.991 [2024-11-18 20:37:33.960753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.991 [2024-11-18 20:37:33.960840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.991 [2024-11-18 20:37:33.960865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.991 [2024-11-18 20:37:33.960880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.991 [2024-11-18 20:37:33.960898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.991 [2024-11-18 20:37:33.960928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.991 qpair failed and we were unable to recover it. 
00:36:21.991 [2024-11-18 20:37:33.970789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.991 [2024-11-18 20:37:33.970872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.991 [2024-11-18 20:37:33.970897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.991 [2024-11-18 20:37:33.970910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.991 [2024-11-18 20:37:33.970923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.991 [2024-11-18 20:37:33.970951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.991 qpair failed and we were unable to recover it. 
00:36:21.991 [2024-11-18 20:37:33.980824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.991 [2024-11-18 20:37:33.980912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.991 [2024-11-18 20:37:33.980938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.991 [2024-11-18 20:37:33.980953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.991 [2024-11-18 20:37:33.980965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.991 [2024-11-18 20:37:33.980993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.991 qpair failed and we were unable to recover it. 
00:36:21.991 [2024-11-18 20:37:33.990898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.991 [2024-11-18 20:37:33.991029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.991 [2024-11-18 20:37:33.991055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.991 [2024-11-18 20:37:33.991070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.991 [2024-11-18 20:37:33.991082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:21.991 [2024-11-18 20:37:33.991117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.991 qpair failed and we were unable to recover it. 
00:36:22.251 [2024-11-18 20:37:34.000894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.251 [2024-11-18 20:37:34.000980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.251 [2024-11-18 20:37:34.001004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.251 [2024-11-18 20:37:34.001026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.251 [2024-11-18 20:37:34.001042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.251 [2024-11-18 20:37:34.001072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.251 qpair failed and we were unable to recover it. 
00:36:22.251 [2024-11-18 20:37:34.010907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.251 [2024-11-18 20:37:34.011000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.251 [2024-11-18 20:37:34.011024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.251 [2024-11-18 20:37:34.011038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.251 [2024-11-18 20:37:34.011051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.251 [2024-11-18 20:37:34.011080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.251 qpair failed and we were unable to recover it. 
00:36:22.251 [2024-11-18 20:37:34.020939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.251 [2024-11-18 20:37:34.021027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.251 [2024-11-18 20:37:34.021051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.251 [2024-11-18 20:37:34.021065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.251 [2024-11-18 20:37:34.021077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.251 [2024-11-18 20:37:34.021106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.251 qpair failed and we were unable to recover it. 
00:36:22.251 [2024-11-18 20:37:34.030976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.251 [2024-11-18 20:37:34.031065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.251 [2024-11-18 20:37:34.031088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.251 [2024-11-18 20:37:34.031103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.251 [2024-11-18 20:37:34.031116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.251 [2024-11-18 20:37:34.031145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.251 qpair failed and we were unable to recover it. 
00:36:22.251 [2024-11-18 20:37:34.040996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.251 [2024-11-18 20:37:34.041085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.251 [2024-11-18 20:37:34.041110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.251 [2024-11-18 20:37:34.041124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.251 [2024-11-18 20:37:34.041137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.251 [2024-11-18 20:37:34.041166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.251 qpair failed and we were unable to recover it. 
00:36:22.251 [2024-11-18 20:37:34.051083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.251 [2024-11-18 20:37:34.051170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.251 [2024-11-18 20:37:34.051199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.251 [2024-11-18 20:37:34.051215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.251 [2024-11-18 20:37:34.051227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.251 [2024-11-18 20:37:34.051256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.251 qpair failed and we were unable to recover it. 
00:36:22.251 [2024-11-18 20:37:34.061046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.251 [2024-11-18 20:37:34.061144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.251 [2024-11-18 20:37:34.061167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.251 [2024-11-18 20:37:34.061182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.251 [2024-11-18 20:37:34.061194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.251 [2024-11-18 20:37:34.061222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.251 qpair failed and we were unable to recover it. 
00:36:22.251 [2024-11-18 20:37:34.071129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.251 [2024-11-18 20:37:34.071214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.251 [2024-11-18 20:37:34.071238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.251 [2024-11-18 20:37:34.071252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.251 [2024-11-18 20:37:34.071265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.251 [2024-11-18 20:37:34.071293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.251 qpair failed and we were unable to recover it. 
00:36:22.251 [2024-11-18 20:37:34.081224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.251 [2024-11-18 20:37:34.081317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.251 [2024-11-18 20:37:34.081341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.251 [2024-11-18 20:37:34.081356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.251 [2024-11-18 20:37:34.081369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.251 [2024-11-18 20:37:34.081397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.251 qpair failed and we were unable to recover it. 
00:36:22.251 [2024-11-18 20:37:34.091136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.251 [2024-11-18 20:37:34.091231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.252 [2024-11-18 20:37:34.091254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.252 [2024-11-18 20:37:34.091269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.252 [2024-11-18 20:37:34.091286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.252 [2024-11-18 20:37:34.091316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.252 qpair failed and we were unable to recover it. 
00:36:22.252 [2024-11-18 20:37:34.101209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.252 [2024-11-18 20:37:34.101293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.252 [2024-11-18 20:37:34.101318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.252 [2024-11-18 20:37:34.101332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.252 [2024-11-18 20:37:34.101345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.252 [2024-11-18 20:37:34.101373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.252 qpair failed and we were unable to recover it. 
00:36:22.252 [2024-11-18 20:37:34.111233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.252 [2024-11-18 20:37:34.111321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.252 [2024-11-18 20:37:34.111345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.252 [2024-11-18 20:37:34.111360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.252 [2024-11-18 20:37:34.111372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.252 [2024-11-18 20:37:34.111400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.252 qpair failed and we were unable to recover it. 
00:36:22.252 [2024-11-18 20:37:34.121231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.252 [2024-11-18 20:37:34.121324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.252 [2024-11-18 20:37:34.121348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.252 [2024-11-18 20:37:34.121362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.252 [2024-11-18 20:37:34.121374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.252 [2024-11-18 20:37:34.121403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.252 qpair failed and we were unable to recover it. 
00:36:22.252 [2024-11-18 20:37:34.131377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.252 [2024-11-18 20:37:34.131472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.252 [2024-11-18 20:37:34.131496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.252 [2024-11-18 20:37:34.131510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.252 [2024-11-18 20:37:34.131523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.252 [2024-11-18 20:37:34.131552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.252 qpair failed and we were unable to recover it.
00:36:22.252 [2024-11-18 20:37:34.141288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.252 [2024-11-18 20:37:34.141375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.252 [2024-11-18 20:37:34.141399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.252 [2024-11-18 20:37:34.141413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.252 [2024-11-18 20:37:34.141426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.252 [2024-11-18 20:37:34.141454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.252 qpair failed and we were unable to recover it.
00:36:22.252 [2024-11-18 20:37:34.151352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.252 [2024-11-18 20:37:34.151455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.252 [2024-11-18 20:37:34.151479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.252 [2024-11-18 20:37:34.151493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.252 [2024-11-18 20:37:34.151506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.252 [2024-11-18 20:37:34.151533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.252 qpair failed and we were unable to recover it.
00:36:22.252 [2024-11-18 20:37:34.161364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.252 [2024-11-18 20:37:34.161454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.252 [2024-11-18 20:37:34.161478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.252 [2024-11-18 20:37:34.161493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.252 [2024-11-18 20:37:34.161505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.252 [2024-11-18 20:37:34.161535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.252 qpair failed and we were unable to recover it.
00:36:22.252 [2024-11-18 20:37:34.171446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.252 [2024-11-18 20:37:34.171533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.252 [2024-11-18 20:37:34.171558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.252 [2024-11-18 20:37:34.171573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.252 [2024-11-18 20:37:34.171586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.252 [2024-11-18 20:37:34.171615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.252 qpair failed and we were unable to recover it.
00:36:22.252 [2024-11-18 20:37:34.181414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.252 [2024-11-18 20:37:34.181496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.252 [2024-11-18 20:37:34.181525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.252 [2024-11-18 20:37:34.181540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.252 [2024-11-18 20:37:34.181553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.252 [2024-11-18 20:37:34.181581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.252 qpair failed and we were unable to recover it.
00:36:22.252 [2024-11-18 20:37:34.191506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.252 [2024-11-18 20:37:34.191607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.252 [2024-11-18 20:37:34.191632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.252 [2024-11-18 20:37:34.191655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.252 [2024-11-18 20:37:34.191668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.252 [2024-11-18 20:37:34.191697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.252 qpair failed and we were unable to recover it.
00:36:22.252 [2024-11-18 20:37:34.201484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.252 [2024-11-18 20:37:34.201573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.252 [2024-11-18 20:37:34.201598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.252 [2024-11-18 20:37:34.201612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.252 [2024-11-18 20:37:34.201624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.252 [2024-11-18 20:37:34.201663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.252 qpair failed and we were unable to recover it.
00:36:22.252 [2024-11-18 20:37:34.211535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.252 [2024-11-18 20:37:34.211623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.252 [2024-11-18 20:37:34.211658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.252 [2024-11-18 20:37:34.211673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.252 [2024-11-18 20:37:34.211686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.252 [2024-11-18 20:37:34.211715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.252 qpair failed and we were unable to recover it.
00:36:22.252 [2024-11-18 20:37:34.221594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.252 [2024-11-18 20:37:34.221709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.252 [2024-11-18 20:37:34.221735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.253 [2024-11-18 20:37:34.221750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.253 [2024-11-18 20:37:34.221768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.253 [2024-11-18 20:37:34.221798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.253 qpair failed and we were unable to recover it.
00:36:22.253 [2024-11-18 20:37:34.231544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.253 [2024-11-18 20:37:34.231643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.253 [2024-11-18 20:37:34.231676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.253 [2024-11-18 20:37:34.231692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.253 [2024-11-18 20:37:34.231705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.253 [2024-11-18 20:37:34.231734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.253 qpair failed and we were unable to recover it.
00:36:22.253 [2024-11-18 20:37:34.241585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.253 [2024-11-18 20:37:34.241732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.253 [2024-11-18 20:37:34.241759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.253 [2024-11-18 20:37:34.241774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.253 [2024-11-18 20:37:34.241786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.253 [2024-11-18 20:37:34.241815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.253 qpair failed and we were unable to recover it.
00:36:22.253 [2024-11-18 20:37:34.251630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.253 [2024-11-18 20:37:34.251725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.253 [2024-11-18 20:37:34.251749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.253 [2024-11-18 20:37:34.251763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.253 [2024-11-18 20:37:34.251776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.253 [2024-11-18 20:37:34.251805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.253 qpair failed and we were unable to recover it.
00:36:22.512 [2024-11-18 20:37:34.261661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.512 [2024-11-18 20:37:34.261744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.512 [2024-11-18 20:37:34.261769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.512 [2024-11-18 20:37:34.261784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.512 [2024-11-18 20:37:34.261797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.512 [2024-11-18 20:37:34.261828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.512 qpair failed and we were unable to recover it.
00:36:22.512 [2024-11-18 20:37:34.271708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.512 [2024-11-18 20:37:34.271797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.512 [2024-11-18 20:37:34.271822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.512 [2024-11-18 20:37:34.271837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.512 [2024-11-18 20:37:34.271849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.512 [2024-11-18 20:37:34.271878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.512 qpair failed and we were unable to recover it.
00:36:22.512 [2024-11-18 20:37:34.281699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.512 [2024-11-18 20:37:34.281784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.512 [2024-11-18 20:37:34.281808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.512 [2024-11-18 20:37:34.281822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.512 [2024-11-18 20:37:34.281835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.513 [2024-11-18 20:37:34.281864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.513 qpair failed and we were unable to recover it.
00:36:22.513 [2024-11-18 20:37:34.291725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.513 [2024-11-18 20:37:34.291809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.513 [2024-11-18 20:37:34.291833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.513 [2024-11-18 20:37:34.291847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.513 [2024-11-18 20:37:34.291860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.513 [2024-11-18 20:37:34.291888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.513 qpair failed and we were unable to recover it.
00:36:22.513 [2024-11-18 20:37:34.301787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.513 [2024-11-18 20:37:34.301874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.513 [2024-11-18 20:37:34.301898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.513 [2024-11-18 20:37:34.301913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.513 [2024-11-18 20:37:34.301925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.513 [2024-11-18 20:37:34.301954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.513 qpair failed and we were unable to recover it.
00:36:22.513 [2024-11-18 20:37:34.311791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.513 [2024-11-18 20:37:34.311881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.513 [2024-11-18 20:37:34.311910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.513 [2024-11-18 20:37:34.311925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.513 [2024-11-18 20:37:34.311939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.513 [2024-11-18 20:37:34.311967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.513 qpair failed and we were unable to recover it.
00:36:22.513 [2024-11-18 20:37:34.321877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.513 [2024-11-18 20:37:34.321966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.513 [2024-11-18 20:37:34.321990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.513 [2024-11-18 20:37:34.322004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.513 [2024-11-18 20:37:34.322017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.513 [2024-11-18 20:37:34.322045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.513 qpair failed and we were unable to recover it.
00:36:22.513 [2024-11-18 20:37:34.331825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.513 [2024-11-18 20:37:34.331911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.513 [2024-11-18 20:37:34.331936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.513 [2024-11-18 20:37:34.331951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.513 [2024-11-18 20:37:34.331964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.513 [2024-11-18 20:37:34.331992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.513 qpair failed and we were unable to recover it.
00:36:22.513 [2024-11-18 20:37:34.341868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.513 [2024-11-18 20:37:34.341954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.513 [2024-11-18 20:37:34.341979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.513 [2024-11-18 20:37:34.341993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.513 [2024-11-18 20:37:34.342006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.513 [2024-11-18 20:37:34.342034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.513 qpair failed and we were unable to recover it.
00:36:22.513 [2024-11-18 20:37:34.351955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.513 [2024-11-18 20:37:34.352044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.513 [2024-11-18 20:37:34.352068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.513 [2024-11-18 20:37:34.352082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.513 [2024-11-18 20:37:34.352100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.513 [2024-11-18 20:37:34.352129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.513 qpair failed and we were unable to recover it.
00:36:22.513 [2024-11-18 20:37:34.361936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.513 [2024-11-18 20:37:34.362057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.513 [2024-11-18 20:37:34.362082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.513 [2024-11-18 20:37:34.362098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.513 [2024-11-18 20:37:34.362111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.513 [2024-11-18 20:37:34.362139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.513 qpair failed and we were unable to recover it.
00:36:22.513 [2024-11-18 20:37:34.371940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.513 [2024-11-18 20:37:34.372035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.513 [2024-11-18 20:37:34.372059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.513 [2024-11-18 20:37:34.372074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.513 [2024-11-18 20:37:34.372087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.513 [2024-11-18 20:37:34.372115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.513 qpair failed and we were unable to recover it.
00:36:22.513 [2024-11-18 20:37:34.382078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.513 [2024-11-18 20:37:34.382213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.513 [2024-11-18 20:37:34.382240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.513 [2024-11-18 20:37:34.382255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.513 [2024-11-18 20:37:34.382267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.513 [2024-11-18 20:37:34.382295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.513 qpair failed and we were unable to recover it.
00:36:22.513 [2024-11-18 20:37:34.392100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.513 [2024-11-18 20:37:34.392201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.513 [2024-11-18 20:37:34.392227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.513 [2024-11-18 20:37:34.392241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.513 [2024-11-18 20:37:34.392253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.513 [2024-11-18 20:37:34.392282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.513 qpair failed and we were unable to recover it.
00:36:22.513 [2024-11-18 20:37:34.402081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.513 [2024-11-18 20:37:34.402175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.513 [2024-11-18 20:37:34.402199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.513 [2024-11-18 20:37:34.402213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.513 [2024-11-18 20:37:34.402225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.513 [2024-11-18 20:37:34.402253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.513 qpair failed and we were unable to recover it.
00:36:22.513 [2024-11-18 20:37:34.412088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.513 [2024-11-18 20:37:34.412179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.513 [2024-11-18 20:37:34.412210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.513 [2024-11-18 20:37:34.412225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.513 [2024-11-18 20:37:34.412237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.513 [2024-11-18 20:37:34.412265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.513 qpair failed and we were unable to recover it.
00:36:22.514 [2024-11-18 20:37:34.422129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.514 [2024-11-18 20:37:34.422218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.514 [2024-11-18 20:37:34.422243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.514 [2024-11-18 20:37:34.422257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.514 [2024-11-18 20:37:34.422269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.514 [2024-11-18 20:37:34.422298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.514 qpair failed and we were unable to recover it.
00:36:22.514 [2024-11-18 20:37:34.432132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.514 [2024-11-18 20:37:34.432228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.514 [2024-11-18 20:37:34.432252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.514 [2024-11-18 20:37:34.432267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.514 [2024-11-18 20:37:34.432279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.514 [2024-11-18 20:37:34.432308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.514 qpair failed and we were unable to recover it.
00:36:22.514 [2024-11-18 20:37:34.442296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.514 [2024-11-18 20:37:34.442385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.514 [2024-11-18 20:37:34.442415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.514 [2024-11-18 20:37:34.442429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.514 [2024-11-18 20:37:34.442442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.514 [2024-11-18 20:37:34.442471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.514 qpair failed and we were unable to recover it.
00:36:22.514 [2024-11-18 20:37:34.452162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.514 [2024-11-18 20:37:34.452254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.514 [2024-11-18 20:37:34.452278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.514 [2024-11-18 20:37:34.452303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.514 [2024-11-18 20:37:34.452316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.514 [2024-11-18 20:37:34.452344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.514 qpair failed and we were unable to recover it.
00:36:22.514 [2024-11-18 20:37:34.462218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.514 [2024-11-18 20:37:34.462357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.514 [2024-11-18 20:37:34.462383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.514 [2024-11-18 20:37:34.462397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.514 [2024-11-18 20:37:34.462409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.514 [2024-11-18 20:37:34.462438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.514 qpair failed and we were unable to recover it.
00:36:22.514 [2024-11-18 20:37:34.472240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:22.514 [2024-11-18 20:37:34.472337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:22.514 [2024-11-18 20:37:34.472361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:22.514 [2024-11-18 20:37:34.472377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:22.514 [2024-11-18 20:37:34.472389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:22.514 [2024-11-18 20:37:34.472427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:22.514 qpair failed and we were unable to recover it.
00:36:22.514 [2024-11-18 20:37:34.482310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.514 [2024-11-18 20:37:34.482407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.514 [2024-11-18 20:37:34.482434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.514 [2024-11-18 20:37:34.482453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.514 [2024-11-18 20:37:34.482472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.514 [2024-11-18 20:37:34.482502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.514 qpair failed and we were unable to recover it. 
00:36:22.514 [2024-11-18 20:37:34.492303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.514 [2024-11-18 20:37:34.492396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.514 [2024-11-18 20:37:34.492421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.514 [2024-11-18 20:37:34.492435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.514 [2024-11-18 20:37:34.492447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.514 [2024-11-18 20:37:34.492475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.514 qpair failed and we were unable to recover it. 
00:36:22.514 [2024-11-18 20:37:34.502303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.514 [2024-11-18 20:37:34.502390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.514 [2024-11-18 20:37:34.502415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.514 [2024-11-18 20:37:34.502429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.514 [2024-11-18 20:37:34.502441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.514 [2024-11-18 20:37:34.502470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.514 qpair failed and we were unable to recover it. 
00:36:22.514 [2024-11-18 20:37:34.512432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.514 [2024-11-18 20:37:34.512528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.514 [2024-11-18 20:37:34.512554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.514 [2024-11-18 20:37:34.512583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.514 [2024-11-18 20:37:34.512596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.514 [2024-11-18 20:37:34.512623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.514 qpair failed and we were unable to recover it. 
00:36:22.774 [2024-11-18 20:37:34.522367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.774 [2024-11-18 20:37:34.522454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.774 [2024-11-18 20:37:34.522479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.774 [2024-11-18 20:37:34.522494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.774 [2024-11-18 20:37:34.522506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.774 [2024-11-18 20:37:34.522535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.774 qpair failed and we were unable to recover it. 
00:36:22.774 [2024-11-18 20:37:34.532451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.774 [2024-11-18 20:37:34.532548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.774 [2024-11-18 20:37:34.532574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.774 [2024-11-18 20:37:34.532588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.774 [2024-11-18 20:37:34.532601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.774 [2024-11-18 20:37:34.532630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.774 qpair failed and we were unable to recover it. 
00:36:22.774 [2024-11-18 20:37:34.542414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.774 [2024-11-18 20:37:34.542503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.774 [2024-11-18 20:37:34.542527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.774 [2024-11-18 20:37:34.542542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.774 [2024-11-18 20:37:34.542554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.774 [2024-11-18 20:37:34.542582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.774 qpair failed and we were unable to recover it. 
00:36:22.774 [2024-11-18 20:37:34.552542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.774 [2024-11-18 20:37:34.552644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.774 [2024-11-18 20:37:34.552669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.774 [2024-11-18 20:37:34.552683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.774 [2024-11-18 20:37:34.552695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.774 [2024-11-18 20:37:34.552724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.774 qpair failed and we were unable to recover it. 
00:36:22.774 [2024-11-18 20:37:34.562471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.774 [2024-11-18 20:37:34.562598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.774 [2024-11-18 20:37:34.562625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.774 [2024-11-18 20:37:34.562650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.774 [2024-11-18 20:37:34.562665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.774 [2024-11-18 20:37:34.562694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.774 qpair failed and we were unable to recover it. 
00:36:22.774 [2024-11-18 20:37:34.572490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.774 [2024-11-18 20:37:34.572582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.774 [2024-11-18 20:37:34.572627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.774 [2024-11-18 20:37:34.572651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.774 [2024-11-18 20:37:34.572664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.774 [2024-11-18 20:37:34.572693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.774 qpair failed and we were unable to recover it. 
00:36:22.774 [2024-11-18 20:37:34.582565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.774 [2024-11-18 20:37:34.582670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.774 [2024-11-18 20:37:34.582695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.774 [2024-11-18 20:37:34.582709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.774 [2024-11-18 20:37:34.582723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.774 [2024-11-18 20:37:34.582753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.774 qpair failed and we were unable to recover it. 
00:36:22.774 [2024-11-18 20:37:34.592569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.774 [2024-11-18 20:37:34.592669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.774 [2024-11-18 20:37:34.592693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.774 [2024-11-18 20:37:34.592707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.774 [2024-11-18 20:37:34.592719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.774 [2024-11-18 20:37:34.592748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.774 qpair failed and we were unable to recover it. 
00:36:22.774 [2024-11-18 20:37:34.602655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.774 [2024-11-18 20:37:34.602749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.774 [2024-11-18 20:37:34.602775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.774 [2024-11-18 20:37:34.602790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.774 [2024-11-18 20:37:34.602802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.774 [2024-11-18 20:37:34.602831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.774 qpair failed and we were unable to recover it. 
00:36:22.774 [2024-11-18 20:37:34.612618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.775 [2024-11-18 20:37:34.612764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.775 [2024-11-18 20:37:34.612794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.775 [2024-11-18 20:37:34.612810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.775 [2024-11-18 20:37:34.612829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.775 [2024-11-18 20:37:34.612859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.775 qpair failed and we were unable to recover it. 
00:36:22.775 [2024-11-18 20:37:34.622692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.775 [2024-11-18 20:37:34.622833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.775 [2024-11-18 20:37:34.622863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.775 [2024-11-18 20:37:34.622880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.775 [2024-11-18 20:37:34.622893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.775 [2024-11-18 20:37:34.622923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.775 qpair failed and we were unable to recover it. 
00:36:22.775 [2024-11-18 20:37:34.632699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.775 [2024-11-18 20:37:34.632805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.775 [2024-11-18 20:37:34.632831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.775 [2024-11-18 20:37:34.632847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.775 [2024-11-18 20:37:34.632860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.775 [2024-11-18 20:37:34.632889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.775 qpair failed and we were unable to recover it. 
00:36:22.775 [2024-11-18 20:37:34.642800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.775 [2024-11-18 20:37:34.642939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.775 [2024-11-18 20:37:34.642965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.775 [2024-11-18 20:37:34.642980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.775 [2024-11-18 20:37:34.642992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.775 [2024-11-18 20:37:34.643021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.775 qpair failed and we were unable to recover it. 
00:36:22.775 [2024-11-18 20:37:34.652807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.775 [2024-11-18 20:37:34.652905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.775 [2024-11-18 20:37:34.652930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.775 [2024-11-18 20:37:34.652945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.775 [2024-11-18 20:37:34.652959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.775 [2024-11-18 20:37:34.652987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.775 qpair failed and we were unable to recover it. 
00:36:22.775 [2024-11-18 20:37:34.662782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.775 [2024-11-18 20:37:34.662901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.775 [2024-11-18 20:37:34.662926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.775 [2024-11-18 20:37:34.662940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.775 [2024-11-18 20:37:34.662953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.775 [2024-11-18 20:37:34.662980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.775 qpair failed and we were unable to recover it. 
00:36:22.775 [2024-11-18 20:37:34.672934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.775 [2024-11-18 20:37:34.673034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.775 [2024-11-18 20:37:34.673059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.775 [2024-11-18 20:37:34.673073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.775 [2024-11-18 20:37:34.673085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.775 [2024-11-18 20:37:34.673114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.775 qpair failed and we were unable to recover it. 
00:36:22.775 [2024-11-18 20:37:34.682848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.775 [2024-11-18 20:37:34.682935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.775 [2024-11-18 20:37:34.682960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.775 [2024-11-18 20:37:34.682974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.775 [2024-11-18 20:37:34.682986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.775 [2024-11-18 20:37:34.683015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.775 qpair failed and we were unable to recover it. 
00:36:22.775 [2024-11-18 20:37:34.692908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.775 [2024-11-18 20:37:34.693000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.775 [2024-11-18 20:37:34.693024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.775 [2024-11-18 20:37:34.693039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.775 [2024-11-18 20:37:34.693051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.775 [2024-11-18 20:37:34.693080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.775 qpair failed and we were unable to recover it. 
00:36:22.775 [2024-11-18 20:37:34.702872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.775 [2024-11-18 20:37:34.703006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.775 [2024-11-18 20:37:34.703038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.775 [2024-11-18 20:37:34.703053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.775 [2024-11-18 20:37:34.703066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.775 [2024-11-18 20:37:34.703094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.775 qpair failed and we were unable to recover it. 
00:36:22.775 [2024-11-18 20:37:34.712971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.775 [2024-11-18 20:37:34.713072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.775 [2024-11-18 20:37:34.713097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.775 [2024-11-18 20:37:34.713112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.775 [2024-11-18 20:37:34.713124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.775 [2024-11-18 20:37:34.713152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.775 qpair failed and we were unable to recover it. 
00:36:22.775 [2024-11-18 20:37:34.722931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.775 [2024-11-18 20:37:34.723023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.775 [2024-11-18 20:37:34.723047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.775 [2024-11-18 20:37:34.723061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.775 [2024-11-18 20:37:34.723078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.775 [2024-11-18 20:37:34.723107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.775 qpair failed and we were unable to recover it. 
00:36:22.775 [2024-11-18 20:37:34.732976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.775 [2024-11-18 20:37:34.733062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.775 [2024-11-18 20:37:34.733086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.775 [2024-11-18 20:37:34.733100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.775 [2024-11-18 20:37:34.733112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.775 [2024-11-18 20:37:34.733141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.775 qpair failed and we were unable to recover it. 
00:36:22.775 [2024-11-18 20:37:34.743007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.776 [2024-11-18 20:37:34.743095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.776 [2024-11-18 20:37:34.743119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.776 [2024-11-18 20:37:34.743134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.776 [2024-11-18 20:37:34.743151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.776 [2024-11-18 20:37:34.743180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.776 qpair failed and we were unable to recover it. 
00:36:22.776 [2024-11-18 20:37:34.753033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.776 [2024-11-18 20:37:34.753121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.776 [2024-11-18 20:37:34.753145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.776 [2024-11-18 20:37:34.753160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.776 [2024-11-18 20:37:34.753172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.776 [2024-11-18 20:37:34.753200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.776 qpair failed and we were unable to recover it. 
00:36:22.776 [2024-11-18 20:37:34.763085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.776 [2024-11-18 20:37:34.763177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.776 [2024-11-18 20:37:34.763201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.776 [2024-11-18 20:37:34.763215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.776 [2024-11-18 20:37:34.763228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.776 [2024-11-18 20:37:34.763256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.776 qpair failed and we were unable to recover it. 
00:36:22.776 [2024-11-18 20:37:34.773070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:22.776 [2024-11-18 20:37:34.773166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:22.776 [2024-11-18 20:37:34.773191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:22.776 [2024-11-18 20:37:34.773206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:22.776 [2024-11-18 20:37:34.773218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:22.776 [2024-11-18 20:37:34.773246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:22.776 qpair failed and we were unable to recover it. 
00:36:23.037 [2024-11-18 20:37:34.783125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.037 [2024-11-18 20:37:34.783247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.037 [2024-11-18 20:37:34.783273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.037 [2024-11-18 20:37:34.783288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.037 [2024-11-18 20:37:34.783301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.037 [2024-11-18 20:37:34.783330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.037 qpair failed and we were unable to recover it. 
00:36:23.037 [2024-11-18 20:37:34.793126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.037 [2024-11-18 20:37:34.793219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.037 [2024-11-18 20:37:34.793243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.037 [2024-11-18 20:37:34.793258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.037 [2024-11-18 20:37:34.793270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.037 [2024-11-18 20:37:34.793299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.037 qpair failed and we were unable to recover it. 
00:36:23.037 [2024-11-18 20:37:34.803314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.037 [2024-11-18 20:37:34.803452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.037 [2024-11-18 20:37:34.803476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.037 [2024-11-18 20:37:34.803491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.037 [2024-11-18 20:37:34.803504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.037 [2024-11-18 20:37:34.803547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.037 qpair failed and we were unable to recover it. 
00:36:23.037 [2024-11-18 20:37:34.813216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.037 [2024-11-18 20:37:34.813302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.037 [2024-11-18 20:37:34.813326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.037 [2024-11-18 20:37:34.813341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.037 [2024-11-18 20:37:34.813353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.037 [2024-11-18 20:37:34.813381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.037 qpair failed and we were unable to recover it. 
00:36:23.037 [2024-11-18 20:37:34.823210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.037 [2024-11-18 20:37:34.823330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.037 [2024-11-18 20:37:34.823355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.037 [2024-11-18 20:37:34.823369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.037 [2024-11-18 20:37:34.823382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.037 [2024-11-18 20:37:34.823410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.037 qpair failed and we were unable to recover it. 
00:36:23.037 [2024-11-18 20:37:34.833253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.037 [2024-11-18 20:37:34.833343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.037 [2024-11-18 20:37:34.833373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.037 [2024-11-18 20:37:34.833388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.037 [2024-11-18 20:37:34.833400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.037 [2024-11-18 20:37:34.833429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.037 qpair failed and we were unable to recover it. 
00:36:23.037 [2024-11-18 20:37:34.843339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.037 [2024-11-18 20:37:34.843437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.037 [2024-11-18 20:37:34.843461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.037 [2024-11-18 20:37:34.843476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.037 [2024-11-18 20:37:34.843489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.037 [2024-11-18 20:37:34.843517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.037 qpair failed and we were unable to recover it. 
00:36:23.038 [2024-11-18 20:37:34.853316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.038 [2024-11-18 20:37:34.853403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.038 [2024-11-18 20:37:34.853427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.038 [2024-11-18 20:37:34.853442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.038 [2024-11-18 20:37:34.853455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.038 [2024-11-18 20:37:34.853484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.038 qpair failed and we were unable to recover it. 
00:36:23.038 [2024-11-18 20:37:34.863336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.038 [2024-11-18 20:37:34.863421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.038 [2024-11-18 20:37:34.863446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.038 [2024-11-18 20:37:34.863461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.038 [2024-11-18 20:37:34.863474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.038 [2024-11-18 20:37:34.863502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.038 qpair failed and we were unable to recover it. 
00:36:23.038 [2024-11-18 20:37:34.873378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.038 [2024-11-18 20:37:34.873468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.038 [2024-11-18 20:37:34.873494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.038 [2024-11-18 20:37:34.873509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.038 [2024-11-18 20:37:34.873527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.038 [2024-11-18 20:37:34.873557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.038 qpair failed and we were unable to recover it. 
00:36:23.038 [2024-11-18 20:37:34.883424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.038 [2024-11-18 20:37:34.883539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.038 [2024-11-18 20:37:34.883563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.038 [2024-11-18 20:37:34.883577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.038 [2024-11-18 20:37:34.883590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.038 [2024-11-18 20:37:34.883619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.038 qpair failed and we were unable to recover it. 
00:36:23.038 [2024-11-18 20:37:34.893472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.038 [2024-11-18 20:37:34.893559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.038 [2024-11-18 20:37:34.893583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.038 [2024-11-18 20:37:34.893598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.038 [2024-11-18 20:37:34.893610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.038 [2024-11-18 20:37:34.893647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.038 qpair failed and we were unable to recover it. 
00:36:23.038 [2024-11-18 20:37:34.903448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.038 [2024-11-18 20:37:34.903557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.038 [2024-11-18 20:37:34.903582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.038 [2024-11-18 20:37:34.903596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.038 [2024-11-18 20:37:34.903609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.038 [2024-11-18 20:37:34.903646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.038 qpair failed and we were unable to recover it. 
00:36:23.038 [2024-11-18 20:37:34.913488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.038 [2024-11-18 20:37:34.913577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.038 [2024-11-18 20:37:34.913602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.038 [2024-11-18 20:37:34.913617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.038 [2024-11-18 20:37:34.913630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.038 [2024-11-18 20:37:34.913667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.038 qpair failed and we were unable to recover it. 
00:36:23.038 [2024-11-18 20:37:34.923493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.038 [2024-11-18 20:37:34.923588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.038 [2024-11-18 20:37:34.923614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.038 [2024-11-18 20:37:34.923629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.038 [2024-11-18 20:37:34.923651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.038 [2024-11-18 20:37:34.923681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.038 qpair failed and we were unable to recover it. 
00:36:23.038 [2024-11-18 20:37:34.933599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.038 [2024-11-18 20:37:34.933745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.038 [2024-11-18 20:37:34.933774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.038 [2024-11-18 20:37:34.933790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.038 [2024-11-18 20:37:34.933803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.038 [2024-11-18 20:37:34.933833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.038 qpair failed and we were unable to recover it. 
00:36:23.038 [2024-11-18 20:37:34.943548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.038 [2024-11-18 20:37:34.943651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.038 [2024-11-18 20:37:34.943676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.038 [2024-11-18 20:37:34.943691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.038 [2024-11-18 20:37:34.943703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.038 [2024-11-18 20:37:34.943733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.038 qpair failed and we were unable to recover it. 
00:36:23.038 [2024-11-18 20:37:34.953623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.038 [2024-11-18 20:37:34.953747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.038 [2024-11-18 20:37:34.953772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.038 [2024-11-18 20:37:34.953786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.038 [2024-11-18 20:37:34.953799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.038 [2024-11-18 20:37:34.953828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.038 qpair failed and we were unable to recover it. 
00:36:23.038 [2024-11-18 20:37:34.963623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.038 [2024-11-18 20:37:34.963722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.038 [2024-11-18 20:37:34.963752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.038 [2024-11-18 20:37:34.963767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.038 [2024-11-18 20:37:34.963780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.038 [2024-11-18 20:37:34.963808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.038 qpair failed and we were unable to recover it. 
00:36:23.038 [2024-11-18 20:37:34.973662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.038 [2024-11-18 20:37:34.973786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.038 [2024-11-18 20:37:34.973810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.038 [2024-11-18 20:37:34.973824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.038 [2024-11-18 20:37:34.973837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.038 [2024-11-18 20:37:34.973866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.038 qpair failed and we were unable to recover it. 
00:36:23.039 [2024-11-18 20:37:34.983693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.039 [2024-11-18 20:37:34.983821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.039 [2024-11-18 20:37:34.983846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.039 [2024-11-18 20:37:34.983860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.039 [2024-11-18 20:37:34.983873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.039 [2024-11-18 20:37:34.983901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.039 qpair failed and we were unable to recover it. 
00:36:23.039 [2024-11-18 20:37:34.993745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.039 [2024-11-18 20:37:34.993847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.039 [2024-11-18 20:37:34.993872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.039 [2024-11-18 20:37:34.993886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.039 [2024-11-18 20:37:34.993899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.039 [2024-11-18 20:37:34.993927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.039 qpair failed and we were unable to recover it. 
00:36:23.039 [2024-11-18 20:37:35.003766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.039 [2024-11-18 20:37:35.003855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.039 [2024-11-18 20:37:35.003880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.039 [2024-11-18 20:37:35.003894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.039 [2024-11-18 20:37:35.003912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.039 [2024-11-18 20:37:35.003942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.039 qpair failed and we were unable to recover it. 
00:36:23.039 [2024-11-18 20:37:35.013800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.039 [2024-11-18 20:37:35.013883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.039 [2024-11-18 20:37:35.013908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.039 [2024-11-18 20:37:35.013922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.039 [2024-11-18 20:37:35.013935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.039 [2024-11-18 20:37:35.013963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.039 qpair failed and we were unable to recover it. 
00:36:23.039 [2024-11-18 20:37:35.023834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.039 [2024-11-18 20:37:35.023919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.039 [2024-11-18 20:37:35.023943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.039 [2024-11-18 20:37:35.023958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.039 [2024-11-18 20:37:35.023971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.039 [2024-11-18 20:37:35.023999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.039 qpair failed and we were unable to recover it. 
00:36:23.039 [2024-11-18 20:37:35.033878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.039 [2024-11-18 20:37:35.033977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.039 [2024-11-18 20:37:35.034002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.039 [2024-11-18 20:37:35.034017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.039 [2024-11-18 20:37:35.034029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.039 [2024-11-18 20:37:35.034057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.039 qpair failed and we were unable to recover it. 
00:36:23.298 [2024-11-18 20:37:35.043869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.298 [2024-11-18 20:37:35.043958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.298 [2024-11-18 20:37:35.043983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.298 [2024-11-18 20:37:35.043998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.298 [2024-11-18 20:37:35.044010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.298 [2024-11-18 20:37:35.044040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.298 qpair failed and we were unable to recover it. 
00:36:23.298 [2024-11-18 20:37:35.053877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.298 [2024-11-18 20:37:35.053966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.299 [2024-11-18 20:37:35.053991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.299 [2024-11-18 20:37:35.054007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.299 [2024-11-18 20:37:35.054021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.299 [2024-11-18 20:37:35.054049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.299 qpair failed and we were unable to recover it. 
00:36:23.299 [2024-11-18 20:37:35.063989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.299 [2024-11-18 20:37:35.064077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.299 [2024-11-18 20:37:35.064102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.299 [2024-11-18 20:37:35.064116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.299 [2024-11-18 20:37:35.064129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.299 [2024-11-18 20:37:35.064158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.299 qpair failed and we were unable to recover it.
00:36:23.299 [2024-11-18 20:37:35.074002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.299 [2024-11-18 20:37:35.074094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.299 [2024-11-18 20:37:35.074119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.299 [2024-11-18 20:37:35.074133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.299 [2024-11-18 20:37:35.074146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.299 [2024-11-18 20:37:35.074174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.299 qpair failed and we were unable to recover it.
00:36:23.299 [2024-11-18 20:37:35.083975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.299 [2024-11-18 20:37:35.084063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.299 [2024-11-18 20:37:35.084087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.299 [2024-11-18 20:37:35.084102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.299 [2024-11-18 20:37:35.084114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.299 [2024-11-18 20:37:35.084143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.299 qpair failed and we were unable to recover it.
00:36:23.299 [2024-11-18 20:37:35.094015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.299 [2024-11-18 20:37:35.094097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.299 [2024-11-18 20:37:35.094126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.299 [2024-11-18 20:37:35.094141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.299 [2024-11-18 20:37:35.094154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.299 [2024-11-18 20:37:35.094182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.299 qpair failed and we were unable to recover it.
00:36:23.299 [2024-11-18 20:37:35.104051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.299 [2024-11-18 20:37:35.104178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.299 [2024-11-18 20:37:35.104203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.299 [2024-11-18 20:37:35.104218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.299 [2024-11-18 20:37:35.104231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.299 [2024-11-18 20:37:35.104259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.299 qpair failed and we were unable to recover it.
00:36:23.299 [2024-11-18 20:37:35.114102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.299 [2024-11-18 20:37:35.114209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.299 [2024-11-18 20:37:35.114233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.299 [2024-11-18 20:37:35.114247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.299 [2024-11-18 20:37:35.114259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.299 [2024-11-18 20:37:35.114288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.299 qpair failed and we were unable to recover it.
00:36:23.299 [2024-11-18 20:37:35.124131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.299 [2024-11-18 20:37:35.124223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.299 [2024-11-18 20:37:35.124247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.299 [2024-11-18 20:37:35.124261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.299 [2024-11-18 20:37:35.124274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.299 [2024-11-18 20:37:35.124303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.299 qpair failed and we were unable to recover it.
00:36:23.299 [2024-11-18 20:37:35.134087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.299 [2024-11-18 20:37:35.134170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.299 [2024-11-18 20:37:35.134195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.299 [2024-11-18 20:37:35.134210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.299 [2024-11-18 20:37:35.134228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.299 [2024-11-18 20:37:35.134257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.299 qpair failed and we were unable to recover it.
00:36:23.299 [2024-11-18 20:37:35.144150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.299 [2024-11-18 20:37:35.144235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.299 [2024-11-18 20:37:35.144260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.299 [2024-11-18 20:37:35.144275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.299 [2024-11-18 20:37:35.144288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.299 [2024-11-18 20:37:35.144317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.299 qpair failed and we were unable to recover it.
00:36:23.299 [2024-11-18 20:37:35.154196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.299 [2024-11-18 20:37:35.154323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.299 [2024-11-18 20:37:35.154347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.299 [2024-11-18 20:37:35.154361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.299 [2024-11-18 20:37:35.154375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.299 [2024-11-18 20:37:35.154403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.299 qpair failed and we were unable to recover it.
00:36:23.299 [2024-11-18 20:37:35.164262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.299 [2024-11-18 20:37:35.164355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.299 [2024-11-18 20:37:35.164381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.299 [2024-11-18 20:37:35.164400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.299 [2024-11-18 20:37:35.164415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.299 [2024-11-18 20:37:35.164445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.299 qpair failed and we were unable to recover it.
00:36:23.299 [2024-11-18 20:37:35.174213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.299 [2024-11-18 20:37:35.174294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.299 [2024-11-18 20:37:35.174320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.299 [2024-11-18 20:37:35.174335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.299 [2024-11-18 20:37:35.174347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.299 [2024-11-18 20:37:35.174378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.299 qpair failed and we were unable to recover it.
00:36:23.300 [2024-11-18 20:37:35.184240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.300 [2024-11-18 20:37:35.184326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.300 [2024-11-18 20:37:35.184351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.300 [2024-11-18 20:37:35.184366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.300 [2024-11-18 20:37:35.184379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.300 [2024-11-18 20:37:35.184408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.300 qpair failed and we were unable to recover it.
00:36:23.300 [2024-11-18 20:37:35.194315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.300 [2024-11-18 20:37:35.194425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.300 [2024-11-18 20:37:35.194451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.300 [2024-11-18 20:37:35.194466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.300 [2024-11-18 20:37:35.194478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.300 [2024-11-18 20:37:35.194507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.300 qpair failed and we were unable to recover it.
00:36:23.300 [2024-11-18 20:37:35.204356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.300 [2024-11-18 20:37:35.204451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.300 [2024-11-18 20:37:35.204476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.300 [2024-11-18 20:37:35.204490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.300 [2024-11-18 20:37:35.204503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.300 [2024-11-18 20:37:35.204531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.300 qpair failed and we were unable to recover it.
00:36:23.300 [2024-11-18 20:37:35.214322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.300 [2024-11-18 20:37:35.214411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.300 [2024-11-18 20:37:35.214436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.300 [2024-11-18 20:37:35.214451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.300 [2024-11-18 20:37:35.214464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.300 [2024-11-18 20:37:35.214492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.300 qpair failed and we were unable to recover it.
00:36:23.300 [2024-11-18 20:37:35.224443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.300 [2024-11-18 20:37:35.224524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.300 [2024-11-18 20:37:35.224557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.300 [2024-11-18 20:37:35.224572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.300 [2024-11-18 20:37:35.224585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.300 [2024-11-18 20:37:35.224613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.300 qpair failed and we were unable to recover it.
00:36:23.300 [2024-11-18 20:37:35.234460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.300 [2024-11-18 20:37:35.234599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.300 [2024-11-18 20:37:35.234623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.300 [2024-11-18 20:37:35.234646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.300 [2024-11-18 20:37:35.234660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.300 [2024-11-18 20:37:35.234689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.300 qpair failed and we were unable to recover it.
00:36:23.300 [2024-11-18 20:37:35.244414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.300 [2024-11-18 20:37:35.244502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.300 [2024-11-18 20:37:35.244527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.300 [2024-11-18 20:37:35.244540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.300 [2024-11-18 20:37:35.244553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.300 [2024-11-18 20:37:35.244582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.300 qpair failed and we were unable to recover it.
00:36:23.300 [2024-11-18 20:37:35.254437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.300 [2024-11-18 20:37:35.254522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.300 [2024-11-18 20:37:35.254547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.300 [2024-11-18 20:37:35.254562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.300 [2024-11-18 20:37:35.254574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.300 [2024-11-18 20:37:35.254602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.300 qpair failed and we were unable to recover it.
00:36:23.300 [2024-11-18 20:37:35.264458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.300 [2024-11-18 20:37:35.264548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.300 [2024-11-18 20:37:35.264573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.300 [2024-11-18 20:37:35.264587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.300 [2024-11-18 20:37:35.264605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.300 [2024-11-18 20:37:35.264634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.300 qpair failed and we were unable to recover it.
00:36:23.300 [2024-11-18 20:37:35.274502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.300 [2024-11-18 20:37:35.274593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.300 [2024-11-18 20:37:35.274618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.300 [2024-11-18 20:37:35.274632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.300 [2024-11-18 20:37:35.274655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.300 [2024-11-18 20:37:35.274684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.300 qpair failed and we were unable to recover it.
00:36:23.300 [2024-11-18 20:37:35.284538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.300 [2024-11-18 20:37:35.284630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.300 [2024-11-18 20:37:35.284663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.300 [2024-11-18 20:37:35.284682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.300 [2024-11-18 20:37:35.284695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.300 [2024-11-18 20:37:35.284723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.300 qpair failed and we were unable to recover it.
00:36:23.300 [2024-11-18 20:37:35.294538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.300 [2024-11-18 20:37:35.294656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.300 [2024-11-18 20:37:35.294691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.300 [2024-11-18 20:37:35.294705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.300 [2024-11-18 20:37:35.294718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.300 [2024-11-18 20:37:35.294747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.300 qpair failed and we were unable to recover it.
00:36:23.300 [2024-11-18 20:37:35.304595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.300 [2024-11-18 20:37:35.304687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.300 [2024-11-18 20:37:35.304718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.300 [2024-11-18 20:37:35.304736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.300 [2024-11-18 20:37:35.304750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.300 [2024-11-18 20:37:35.304781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.300 qpair failed and we were unable to recover it.
00:36:23.560 [2024-11-18 20:37:35.314651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.560 [2024-11-18 20:37:35.314799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.560 [2024-11-18 20:37:35.314827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.560 [2024-11-18 20:37:35.314842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.560 [2024-11-18 20:37:35.314854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.560 [2024-11-18 20:37:35.314883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.560 qpair failed and we were unable to recover it.
00:36:23.560 [2024-11-18 20:37:35.324769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.560 [2024-11-18 20:37:35.324858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.560 [2024-11-18 20:37:35.324883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.560 [2024-11-18 20:37:35.324898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.560 [2024-11-18 20:37:35.324910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.560 [2024-11-18 20:37:35.324940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.560 qpair failed and we were unable to recover it.
00:36:23.561 [2024-11-18 20:37:35.334704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.561 [2024-11-18 20:37:35.334792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.561 [2024-11-18 20:37:35.334817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.561 [2024-11-18 20:37:35.334830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.561 [2024-11-18 20:37:35.334843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.561 [2024-11-18 20:37:35.334872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.561 qpair failed and we were unable to recover it.
00:36:23.561 [2024-11-18 20:37:35.344699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.561 [2024-11-18 20:37:35.344801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.561 [2024-11-18 20:37:35.344826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.561 [2024-11-18 20:37:35.344840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.561 [2024-11-18 20:37:35.344853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.561 [2024-11-18 20:37:35.344881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.561 qpair failed and we were unable to recover it.
00:36:23.561 [2024-11-18 20:37:35.354776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.561 [2024-11-18 20:37:35.354867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.561 [2024-11-18 20:37:35.354896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.561 [2024-11-18 20:37:35.354910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.561 [2024-11-18 20:37:35.354923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.561 [2024-11-18 20:37:35.354952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.561 qpair failed and we were unable to recover it.
00:36:23.561 [2024-11-18 20:37:35.364786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.561 [2024-11-18 20:37:35.364871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.561 [2024-11-18 20:37:35.364895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.561 [2024-11-18 20:37:35.364909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.561 [2024-11-18 20:37:35.364921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.561 [2024-11-18 20:37:35.364949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.561 qpair failed and we were unable to recover it.
00:36:23.561 [2024-11-18 20:37:35.374816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.561 [2024-11-18 20:37:35.374903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.561 [2024-11-18 20:37:35.374927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.561 [2024-11-18 20:37:35.374941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.561 [2024-11-18 20:37:35.374953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.561 [2024-11-18 20:37:35.374981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.561 qpair failed and we were unable to recover it.
00:36:23.561 [2024-11-18 20:37:35.384829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.561 [2024-11-18 20:37:35.384915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.561 [2024-11-18 20:37:35.384940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.561 [2024-11-18 20:37:35.384954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.561 [2024-11-18 20:37:35.384966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.561 [2024-11-18 20:37:35.384995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.561 qpair failed and we were unable to recover it.
00:36:23.561 [2024-11-18 20:37:35.394893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.561 [2024-11-18 20:37:35.394984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.561 [2024-11-18 20:37:35.395008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.561 [2024-11-18 20:37:35.395027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.561 [2024-11-18 20:37:35.395041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.561 [2024-11-18 20:37:35.395069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.561 qpair failed and we were unable to recover it.
00:36:23.561 [2024-11-18 20:37:35.404892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.561 [2024-11-18 20:37:35.404979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.561 [2024-11-18 20:37:35.405004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.561 [2024-11-18 20:37:35.405019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.561 [2024-11-18 20:37:35.405031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.561 [2024-11-18 20:37:35.405060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.561 qpair failed and we were unable to recover it.
00:36:23.561 [2024-11-18 20:37:35.414935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.561 [2024-11-18 20:37:35.415018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.561 [2024-11-18 20:37:35.415042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.561 [2024-11-18 20:37:35.415056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.561 [2024-11-18 20:37:35.415069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.561 [2024-11-18 20:37:35.415096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.561 qpair failed and we were unable to recover it. 
00:36:23.561 [2024-11-18 20:37:35.424994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.561 [2024-11-18 20:37:35.425090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.561 [2024-11-18 20:37:35.425115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.561 [2024-11-18 20:37:35.425129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.561 [2024-11-18 20:37:35.425142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.561 [2024-11-18 20:37:35.425171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.561 qpair failed and we were unable to recover it. 
00:36:23.561 [2024-11-18 20:37:35.435025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.561 [2024-11-18 20:37:35.435153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.561 [2024-11-18 20:37:35.435178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.561 [2024-11-18 20:37:35.435192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.561 [2024-11-18 20:37:35.435205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.561 [2024-11-18 20:37:35.435234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.561 qpair failed and we were unable to recover it. 
00:36:23.561 [2024-11-18 20:37:35.445016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.561 [2024-11-18 20:37:35.445105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.561 [2024-11-18 20:37:35.445129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.561 [2024-11-18 20:37:35.445143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.561 [2024-11-18 20:37:35.445156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.561 [2024-11-18 20:37:35.445185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.561 qpair failed and we were unable to recover it. 
00:36:23.562 [2024-11-18 20:37:35.455046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.562 [2024-11-18 20:37:35.455131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.562 [2024-11-18 20:37:35.455155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.562 [2024-11-18 20:37:35.455170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.562 [2024-11-18 20:37:35.455182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.562 [2024-11-18 20:37:35.455211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.562 qpair failed and we were unable to recover it. 
00:36:23.562 [2024-11-18 20:37:35.465089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.562 [2024-11-18 20:37:35.465221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.562 [2024-11-18 20:37:35.465250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.562 [2024-11-18 20:37:35.465267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.562 [2024-11-18 20:37:35.465280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.562 [2024-11-18 20:37:35.465309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.562 qpair failed and we were unable to recover it. 
00:36:23.562 [2024-11-18 20:37:35.475211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.562 [2024-11-18 20:37:35.475329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.562 [2024-11-18 20:37:35.475355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.562 [2024-11-18 20:37:35.475369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.562 [2024-11-18 20:37:35.475382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.562 [2024-11-18 20:37:35.475410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.562 qpair failed and we were unable to recover it. 
00:36:23.562 [2024-11-18 20:37:35.485153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.562 [2024-11-18 20:37:35.485239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.562 [2024-11-18 20:37:35.485268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.562 [2024-11-18 20:37:35.485284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.562 [2024-11-18 20:37:35.485297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.562 [2024-11-18 20:37:35.485326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.562 qpair failed and we were unable to recover it. 
00:36:23.562 [2024-11-18 20:37:35.495178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.562 [2024-11-18 20:37:35.495264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.562 [2024-11-18 20:37:35.495288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.562 [2024-11-18 20:37:35.495303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.562 [2024-11-18 20:37:35.495316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.562 [2024-11-18 20:37:35.495344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.562 qpair failed and we were unable to recover it. 
00:36:23.562 [2024-11-18 20:37:35.505195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.562 [2024-11-18 20:37:35.505277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.562 [2024-11-18 20:37:35.505301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.562 [2024-11-18 20:37:35.505314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.562 [2024-11-18 20:37:35.505328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.562 [2024-11-18 20:37:35.505356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.562 qpair failed and we were unable to recover it. 
00:36:23.562 [2024-11-18 20:37:35.515257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.562 [2024-11-18 20:37:35.515367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.562 [2024-11-18 20:37:35.515392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.562 [2024-11-18 20:37:35.515406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.562 [2024-11-18 20:37:35.515418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.562 [2024-11-18 20:37:35.515447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.562 qpair failed and we were unable to recover it. 
00:36:23.562 [2024-11-18 20:37:35.525270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.562 [2024-11-18 20:37:35.525403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.562 [2024-11-18 20:37:35.525427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.562 [2024-11-18 20:37:35.525447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.562 [2024-11-18 20:37:35.525460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.562 [2024-11-18 20:37:35.525489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.562 qpair failed and we were unable to recover it. 
00:36:23.562 [2024-11-18 20:37:35.535250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.562 [2024-11-18 20:37:35.535381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.562 [2024-11-18 20:37:35.535405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.562 [2024-11-18 20:37:35.535420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.562 [2024-11-18 20:37:35.535433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.562 [2024-11-18 20:37:35.535462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.562 qpair failed and we were unable to recover it. 
00:36:23.562 [2024-11-18 20:37:35.545271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.562 [2024-11-18 20:37:35.545367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.562 [2024-11-18 20:37:35.545392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.562 [2024-11-18 20:37:35.545407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.562 [2024-11-18 20:37:35.545419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.562 [2024-11-18 20:37:35.545447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.562 qpair failed and we were unable to recover it. 
00:36:23.562 [2024-11-18 20:37:35.555425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.562 [2024-11-18 20:37:35.555513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.562 [2024-11-18 20:37:35.555536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.562 [2024-11-18 20:37:35.555567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.562 [2024-11-18 20:37:35.555580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.562 [2024-11-18 20:37:35.555608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.562 qpair failed and we were unable to recover it. 
00:36:23.562 [2024-11-18 20:37:35.565381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.562 [2024-11-18 20:37:35.565476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.562 [2024-11-18 20:37:35.565502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.562 [2024-11-18 20:37:35.565517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.563 [2024-11-18 20:37:35.565530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.563 [2024-11-18 20:37:35.565559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.563 qpair failed and we were unable to recover it. 
00:36:23.822 [2024-11-18 20:37:35.575404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.822 [2024-11-18 20:37:35.575490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.822 [2024-11-18 20:37:35.575515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.822 [2024-11-18 20:37:35.575529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.822 [2024-11-18 20:37:35.575542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.822 [2024-11-18 20:37:35.575571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.822 qpair failed and we were unable to recover it. 
00:36:23.822 [2024-11-18 20:37:35.585427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.822 [2024-11-18 20:37:35.585520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.822 [2024-11-18 20:37:35.585545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.822 [2024-11-18 20:37:35.585559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.822 [2024-11-18 20:37:35.585572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.823 [2024-11-18 20:37:35.585600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.823 qpair failed and we were unable to recover it. 
00:36:23.823 [2024-11-18 20:37:35.595517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.823 [2024-11-18 20:37:35.595607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.823 [2024-11-18 20:37:35.595632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.823 [2024-11-18 20:37:35.595659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.823 [2024-11-18 20:37:35.595673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.823 [2024-11-18 20:37:35.595702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.823 qpair failed and we were unable to recover it. 
00:36:23.823 [2024-11-18 20:37:35.605490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.823 [2024-11-18 20:37:35.605581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.823 [2024-11-18 20:37:35.605606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.823 [2024-11-18 20:37:35.605621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.823 [2024-11-18 20:37:35.605633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.823 [2024-11-18 20:37:35.605679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.823 qpair failed and we were unable to recover it. 
00:36:23.823 [2024-11-18 20:37:35.615521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.823 [2024-11-18 20:37:35.615606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.823 [2024-11-18 20:37:35.615642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.823 [2024-11-18 20:37:35.615660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.823 [2024-11-18 20:37:35.615683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.823 [2024-11-18 20:37:35.615711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.823 qpair failed and we were unable to recover it. 
00:36:23.823 [2024-11-18 20:37:35.625550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.823 [2024-11-18 20:37:35.625642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.823 [2024-11-18 20:37:35.625677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.823 [2024-11-18 20:37:35.625692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.823 [2024-11-18 20:37:35.625704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.823 [2024-11-18 20:37:35.625733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.823 qpair failed and we were unable to recover it. 
00:36:23.823 [2024-11-18 20:37:35.635611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.823 [2024-11-18 20:37:35.635709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.823 [2024-11-18 20:37:35.635733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.823 [2024-11-18 20:37:35.635747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.823 [2024-11-18 20:37:35.635759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.823 [2024-11-18 20:37:35.635795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.823 qpair failed and we were unable to recover it. 
00:36:23.823 [2024-11-18 20:37:35.645600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.823 [2024-11-18 20:37:35.645694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.823 [2024-11-18 20:37:35.645719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.823 [2024-11-18 20:37:35.645733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.823 [2024-11-18 20:37:35.645745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.823 [2024-11-18 20:37:35.645773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.823 qpair failed and we were unable to recover it. 
00:36:23.823 [2024-11-18 20:37:35.655655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.823 [2024-11-18 20:37:35.655744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.823 [2024-11-18 20:37:35.655768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.823 [2024-11-18 20:37:35.655787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.823 [2024-11-18 20:37:35.655800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.823 [2024-11-18 20:37:35.655829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.823 qpair failed and we were unable to recover it. 
00:36:23.823 [2024-11-18 20:37:35.665674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.823 [2024-11-18 20:37:35.665762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.823 [2024-11-18 20:37:35.665787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.823 [2024-11-18 20:37:35.665802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.823 [2024-11-18 20:37:35.665814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.823 [2024-11-18 20:37:35.665843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.823 qpair failed and we were unable to recover it. 
00:36:23.823 [2024-11-18 20:37:35.675684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:23.823 [2024-11-18 20:37:35.675777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:23.823 [2024-11-18 20:37:35.675803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:23.823 [2024-11-18 20:37:35.675817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:23.823 [2024-11-18 20:37:35.675830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:23.823 [2024-11-18 20:37:35.675859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.823 qpair failed and we were unable to recover it. 
00:36:23.823 [2024-11-18 20:37:35.685703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.823 [2024-11-18 20:37:35.685793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.823 [2024-11-18 20:37:35.685819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.823 [2024-11-18 20:37:35.685833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.823 [2024-11-18 20:37:35.685846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.823 [2024-11-18 20:37:35.685875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.823 qpair failed and we were unable to recover it.
00:36:23.823 [2024-11-18 20:37:35.695725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.823 [2024-11-18 20:37:35.695810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.823 [2024-11-18 20:37:35.695835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.823 [2024-11-18 20:37:35.695849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.823 [2024-11-18 20:37:35.695861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.824 [2024-11-18 20:37:35.695889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.824 qpair failed and we were unable to recover it.
00:36:23.824 [2024-11-18 20:37:35.705803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.824 [2024-11-18 20:37:35.705891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.824 [2024-11-18 20:37:35.705916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.824 [2024-11-18 20:37:35.705930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.824 [2024-11-18 20:37:35.705943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.824 [2024-11-18 20:37:35.705972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.824 qpair failed and we were unable to recover it.
00:36:23.824 [2024-11-18 20:37:35.715798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.824 [2024-11-18 20:37:35.715903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.824 [2024-11-18 20:37:35.715926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.824 [2024-11-18 20:37:35.715940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.824 [2024-11-18 20:37:35.715953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.824 [2024-11-18 20:37:35.715981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.824 qpair failed and we were unable to recover it.
00:36:23.824 [2024-11-18 20:37:35.725842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.824 [2024-11-18 20:37:35.725929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.824 [2024-11-18 20:37:35.725953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.824 [2024-11-18 20:37:35.725968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.824 [2024-11-18 20:37:35.725980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.824 [2024-11-18 20:37:35.726009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.824 qpair failed and we were unable to recover it.
00:36:23.824 [2024-11-18 20:37:35.735882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.824 [2024-11-18 20:37:35.735960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.824 [2024-11-18 20:37:35.735985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.824 [2024-11-18 20:37:35.736000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.824 [2024-11-18 20:37:35.736012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.824 [2024-11-18 20:37:35.736040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.824 qpair failed and we were unable to recover it.
00:36:23.824 [2024-11-18 20:37:35.745871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.824 [2024-11-18 20:37:35.745999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.824 [2024-11-18 20:37:35.746025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.824 [2024-11-18 20:37:35.746039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.824 [2024-11-18 20:37:35.746052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.824 [2024-11-18 20:37:35.746081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.824 qpair failed and we were unable to recover it.
00:36:23.824 [2024-11-18 20:37:35.755936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.824 [2024-11-18 20:37:35.756058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.824 [2024-11-18 20:37:35.756082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.824 [2024-11-18 20:37:35.756097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.824 [2024-11-18 20:37:35.756110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.824 [2024-11-18 20:37:35.756138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.824 qpair failed and we were unable to recover it.
00:36:23.824 [2024-11-18 20:37:35.765949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.824 [2024-11-18 20:37:35.766032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.824 [2024-11-18 20:37:35.766057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.824 [2024-11-18 20:37:35.766071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.824 [2024-11-18 20:37:35.766084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.824 [2024-11-18 20:37:35.766112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.824 qpair failed and we were unable to recover it.
00:36:23.824 [2024-11-18 20:37:35.775971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.824 [2024-11-18 20:37:35.776060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.824 [2024-11-18 20:37:35.776085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.824 [2024-11-18 20:37:35.776100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.824 [2024-11-18 20:37:35.776112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.824 [2024-11-18 20:37:35.776140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.824 qpair failed and we were unable to recover it.
00:36:23.824 [2024-11-18 20:37:35.786005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.824 [2024-11-18 20:37:35.786124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.824 [2024-11-18 20:37:35.786148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.824 [2024-11-18 20:37:35.786168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.824 [2024-11-18 20:37:35.786181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.824 [2024-11-18 20:37:35.786210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.824 qpair failed and we were unable to recover it.
00:36:23.824 [2024-11-18 20:37:35.796099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.824 [2024-11-18 20:37:35.796205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.824 [2024-11-18 20:37:35.796230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.824 [2024-11-18 20:37:35.796244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.824 [2024-11-18 20:37:35.796256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.824 [2024-11-18 20:37:35.796285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.824 qpair failed and we were unable to recover it.
00:36:23.824 [2024-11-18 20:37:35.806112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.824 [2024-11-18 20:37:35.806199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.824 [2024-11-18 20:37:35.806223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.824 [2024-11-18 20:37:35.806237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.824 [2024-11-18 20:37:35.806250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.824 [2024-11-18 20:37:35.806279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.824 qpair failed and we were unable to recover it.
00:36:23.825 [2024-11-18 20:37:35.816072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.825 [2024-11-18 20:37:35.816202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.825 [2024-11-18 20:37:35.816229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.825 [2024-11-18 20:37:35.816244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.825 [2024-11-18 20:37:35.816258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.825 [2024-11-18 20:37:35.816286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.825 qpair failed and we were unable to recover it.
00:36:23.825 [2024-11-18 20:37:35.826091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:23.825 [2024-11-18 20:37:35.826172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:23.825 [2024-11-18 20:37:35.826197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:23.825 [2024-11-18 20:37:35.826212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:23.825 [2024-11-18 20:37:35.826224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:23.825 [2024-11-18 20:37:35.826253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:23.825 qpair failed and we were unable to recover it.
00:36:24.084 [2024-11-18 20:37:35.836188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:24.084 [2024-11-18 20:37:35.836295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:24.084 [2024-11-18 20:37:35.836321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:24.084 [2024-11-18 20:37:35.836336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:24.084 [2024-11-18 20:37:35.836348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:24.084 [2024-11-18 20:37:35.836377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:24.084 qpair failed and we were unable to recover it.
00:36:24.084 [2024-11-18 20:37:35.846177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:24.084 [2024-11-18 20:37:35.846290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:24.084 [2024-11-18 20:37:35.846316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:24.084 [2024-11-18 20:37:35.846331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:24.084 [2024-11-18 20:37:35.846343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:24.084 [2024-11-18 20:37:35.846371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:24.084 qpair failed and we were unable to recover it.
00:36:24.084 [2024-11-18 20:37:35.856264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:24.084 [2024-11-18 20:37:35.856355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:24.084 [2024-11-18 20:37:35.856379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:24.084 [2024-11-18 20:37:35.856392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:24.084 [2024-11-18 20:37:35.856404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:24.084 [2024-11-18 20:37:35.856432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:24.084 qpair failed and we were unable to recover it.
00:36:24.084 [2024-11-18 20:37:35.866232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:24.084 [2024-11-18 20:37:35.866345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:24.084 [2024-11-18 20:37:35.866375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:24.084 [2024-11-18 20:37:35.866393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:24.084 [2024-11-18 20:37:35.866405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:24.084 [2024-11-18 20:37:35.866435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:24.084 qpair failed and we were unable to recover it.
00:36:24.084 [2024-11-18 20:37:35.876317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:24.084 [2024-11-18 20:37:35.876416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:24.084 [2024-11-18 20:37:35.876441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:24.084 [2024-11-18 20:37:35.876456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:24.084 [2024-11-18 20:37:35.876468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:24.084 [2024-11-18 20:37:35.876497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:24.084 qpair failed and we were unable to recover it.
00:36:24.084 [2024-11-18 20:37:35.886301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:24.084 [2024-11-18 20:37:35.886389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:24.084 [2024-11-18 20:37:35.886414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:24.084 [2024-11-18 20:37:35.886428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:24.084 [2024-11-18 20:37:35.886441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:24.084 [2024-11-18 20:37:35.886469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:24.084 qpair failed and we were unable to recover it.
00:36:24.084 [2024-11-18 20:37:35.896389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:24.084 [2024-11-18 20:37:35.896493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:24.084 [2024-11-18 20:37:35.896518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:24.084 [2024-11-18 20:37:35.896532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:24.084 [2024-11-18 20:37:35.896544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:24.084 [2024-11-18 20:37:35.896572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:24.084 qpair failed and we were unable to recover it.
00:36:24.084 [2024-11-18 20:37:35.906387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:24.085 [2024-11-18 20:37:35.906504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:24.085 [2024-11-18 20:37:35.906531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:24.085 [2024-11-18 20:37:35.906546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:24.085 [2024-11-18 20:37:35.906558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:24.085 [2024-11-18 20:37:35.906586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:24.085 qpair failed and we were unable to recover it.
00:36:24.085 [2024-11-18 20:37:35.916374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:24.085 [2024-11-18 20:37:35.916465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:24.085 [2024-11-18 20:37:35.916488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:24.085 [2024-11-18 20:37:35.916507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:24.085 [2024-11-18 20:37:35.916521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:24.085 [2024-11-18 20:37:35.916549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:24.085 qpair failed and we were unable to recover it.
00:36:24.085 [2024-11-18 20:37:35.926474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:24.085 [2024-11-18 20:37:35.926578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:24.085 [2024-11-18 20:37:35.926605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:24.085 [2024-11-18 20:37:35.926620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:24.085 [2024-11-18 20:37:35.926633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:24.085 [2024-11-18 20:37:35.926676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:24.085 qpair failed and we were unable to recover it.
00:36:24.085 [2024-11-18 20:37:35.936449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:24.085 [2024-11-18 20:37:35.936578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:24.085 [2024-11-18 20:37:35.936603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:24.085 [2024-11-18 20:37:35.936618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:24.085 [2024-11-18 20:37:35.936630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:24.085 [2024-11-18 20:37:35.936669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:24.085 qpair failed and we were unable to recover it.
00:36:24.085 [2024-11-18 20:37:35.946463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:24.085 [2024-11-18 20:37:35.946579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:24.085 [2024-11-18 20:37:35.946605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:24.085 [2024-11-18 20:37:35.946621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:24.085 [2024-11-18 20:37:35.946633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:24.085 [2024-11-18 20:37:35.946671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:24.085 qpair failed and we were unable to recover it.
00:36:24.085 [2024-11-18 20:37:35.956497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:24.085 [2024-11-18 20:37:35.956592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:24.085 [2024-11-18 20:37:35.956616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:24.085 [2024-11-18 20:37:35.956629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:24.085 [2024-11-18 20:37:35.956650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:24.085 [2024-11-18 20:37:35.956679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:24.085 qpair failed and we were unable to recover it.
00:36:24.085 [2024-11-18 20:37:35.966502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:24.085 [2024-11-18 20:37:35.966587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:24.085 [2024-11-18 20:37:35.966612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:24.085 [2024-11-18 20:37:35.966626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:24.085 [2024-11-18 20:37:35.966647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:24.085 [2024-11-18 20:37:35.966678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:24.085 qpair failed and we were unable to recover it.
00:36:24.085 [2024-11-18 20:37:35.976509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:24.085 [2024-11-18 20:37:35.976587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:24.085 [2024-11-18 20:37:35.976611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:24.085 [2024-11-18 20:37:35.976625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:24.085 [2024-11-18 20:37:35.976645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:24.085 [2024-11-18 20:37:35.976675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:24.085 qpair failed and we were unable to recover it.
00:36:24.085 [2024-11-18 20:37:35.986532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:24.085 [2024-11-18 20:37:35.986612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:24.085 [2024-11-18 20:37:35.986644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:24.085 [2024-11-18 20:37:35.986660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:24.085 [2024-11-18 20:37:35.986673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:24.085 [2024-11-18 20:37:35.986701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:24.085 qpair failed and we were unable to recover it.
00:36:24.085 [2024-11-18 20:37:35.996584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:24.085 [2024-11-18 20:37:35.996682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:24.085 [2024-11-18 20:37:35.996707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:24.085 [2024-11-18 20:37:35.996720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:24.085 [2024-11-18 20:37:35.996733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:24.085 [2024-11-18 20:37:35.996762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:24.085 qpair failed and we were unable to recover it.
00:36:24.085 [2024-11-18 20:37:36.006599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:24.085 [2024-11-18 20:37:36.006713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:24.085 [2024-11-18 20:37:36.006739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:24.085 [2024-11-18 20:37:36.006754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:24.085 [2024-11-18 20:37:36.006766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:24.085 [2024-11-18 20:37:36.006795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:24.086 qpair failed and we were unable to recover it.
00:36:24.086 [2024-11-18 20:37:36.016657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:24.086 [2024-11-18 20:37:36.016746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:24.086 [2024-11-18 20:37:36.016770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:24.086 [2024-11-18 20:37:36.016784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:24.086 [2024-11-18 20:37:36.016797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:24.086 [2024-11-18 20:37:36.016825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:24.086 qpair failed and we were unable to recover it.
00:36:24.086 [2024-11-18 20:37:36.026693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:24.086 [2024-11-18 20:37:36.026790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:24.086 [2024-11-18 20:37:36.026818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:24.086 [2024-11-18 20:37:36.026834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:24.086 [2024-11-18 20:37:36.026847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40
00:36:24.086 [2024-11-18 20:37:36.026877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:24.086 qpair failed and we were unable to recover it.
00:36:24.086 [2024-11-18 20:37:36.036729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:24.086 [2024-11-18 20:37:36.036859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:24.086 [2024-11-18 20:37:36.036885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:24.086 [2024-11-18 20:37:36.036900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:24.086 [2024-11-18 20:37:36.036913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:24.086 [2024-11-18 20:37:36.036942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:24.086 qpair failed and we were unable to recover it. 
00:36:24.086 [2024-11-18 20:37:36.046757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:24.086 [2024-11-18 20:37:36.046886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:24.086 [2024-11-18 20:37:36.046912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:24.086 [2024-11-18 20:37:36.046932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:24.086 [2024-11-18 20:37:36.046945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:24.086 [2024-11-18 20:37:36.046974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:24.086 qpair failed and we were unable to recover it. 
00:36:24.086 [2024-11-18 20:37:36.056772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:24.086 [2024-11-18 20:37:36.056900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:24.086 [2024-11-18 20:37:36.056926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:24.086 [2024-11-18 20:37:36.056940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:24.086 [2024-11-18 20:37:36.056953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1671b40 00:36:24.086 [2024-11-18 20:37:36.056981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:24.086 qpair failed and we were unable to recover it. 
00:36:24.086 [2024-11-18 20:37:36.066808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:24.086 [2024-11-18 20:37:36.066893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:24.086 [2024-11-18 20:37:36.066924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:24.086 [2024-11-18 20:37:36.066939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:24.086 [2024-11-18 20:37:36.066952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:24.086 [2024-11-18 20:37:36.066984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:24.086 qpair failed and we were unable to recover it. 
00:36:24.086 [2024-11-18 20:37:36.076887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:24.086 [2024-11-18 20:37:36.076986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:24.086 [2024-11-18 20:37:36.077011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:24.086 [2024-11-18 20:37:36.077026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:24.086 [2024-11-18 20:37:36.077038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe698000b90 00:36:24.086 [2024-11-18 20:37:36.077068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:24.086 qpair failed and we were unable to recover it. 
00:36:24.086 [2024-11-18 20:37:36.086853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:24.086 [2024-11-18 20:37:36.086945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:24.086 [2024-11-18 20:37:36.086977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:24.086 [2024-11-18 20:37:36.086993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:24.086 [2024-11-18 20:37:36.087006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe694000b90 00:36:24.086 [2024-11-18 20:37:36.087047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:24.086 qpair failed and we were unable to recover it. 
00:36:24.345 [2024-11-18 20:37:36.096906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:24.345 [2024-11-18 20:37:36.097041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:24.345 [2024-11-18 20:37:36.097070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:24.345 [2024-11-18 20:37:36.097086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:24.345 [2024-11-18 20:37:36.097098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe694000b90 00:36:24.345 [2024-11-18 20:37:36.097129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:24.345 qpair failed and we were unable to recover it. 00:36:24.345 Controller properly reset. 00:36:24.345 Initializing NVMe Controllers 00:36:24.345 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:24.345 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:24.345 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:24.345 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:24.345 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:24.345 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:24.345 Initialization complete. Launching workers. 
00:36:24.345 Starting thread on core 1 00:36:24.345 Starting thread on core 2 00:36:24.345 Starting thread on core 3 00:36:24.345 Starting thread on core 0 00:36:24.345 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:24.345 00:36:24.345 real 0m10.802s 00:36:24.345 user 0m19.678s 00:36:24.345 sys 0m5.120s 00:36:24.345 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:24.345 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:24.345 ************************************ 00:36:24.345 END TEST nvmf_target_disconnect_tc2 00:36:24.345 ************************************ 00:36:24.345 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:36:24.345 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:24.345 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:24.345 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:24.345 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:36:24.345 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:24.345 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:36:24.345 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:24.345 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:24.345 rmmod nvme_tcp 00:36:24.345 rmmod nvme_fabrics 00:36:24.345 rmmod nvme_keyring 00:36:24.603 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:36:24.603 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:36:24.603 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:36:24.603 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 405018 ']' 00:36:24.603 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 405018 00:36:24.603 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 405018 ']' 00:36:24.603 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 405018 00:36:24.603 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:36:24.603 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:24.603 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 405018 00:36:24.603 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:36:24.603 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:36:24.603 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 405018' 00:36:24.603 killing process with pid 405018 00:36:24.603 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 405018 00:36:24.603 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 405018 00:36:24.862 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:24.862 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:24.862 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:24.862 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:36:24.862 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:24.862 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:36:24.862 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:36:24.862 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:24.862 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:24.862 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:24.862 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:24.862 20:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:26.769 20:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:26.769 00:36:26.769 real 0m15.694s 00:36:26.769 user 0m46.142s 00:36:26.769 sys 0m7.157s 00:36:26.769 20:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:26.769 20:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:26.769 ************************************ 00:36:26.769 END TEST nvmf_target_disconnect 00:36:26.769 ************************************ 00:36:26.769 20:37:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:26.769 00:36:26.769 real 6m43.122s 00:36:26.769 user 17m12.098s 00:36:26.769 sys 1m26.733s 00:36:26.769 20:37:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:26.769 20:37:38 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.769 ************************************ 00:36:26.769 END TEST nvmf_host 00:36:26.769 ************************************ 00:36:26.769 20:37:38 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:36:26.769 20:37:38 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:36:26.769 20:37:38 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:26.769 20:37:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:26.769 20:37:38 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:26.769 20:37:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:26.769 ************************************ 00:36:26.769 START TEST nvmf_target_core_interrupt_mode 00:36:26.769 ************************************ 00:36:26.769 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:27.028 * Looking for test storage... 
00:36:27.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:36:27.028 20:37:38 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:27.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:27.028 --rc 
genhtml_branch_coverage=1 00:36:27.028 --rc genhtml_function_coverage=1 00:36:27.028 --rc genhtml_legend=1 00:36:27.028 --rc geninfo_all_blocks=1 00:36:27.028 --rc geninfo_unexecuted_blocks=1 00:36:27.028 00:36:27.028 ' 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:27.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:27.028 --rc genhtml_branch_coverage=1 00:36:27.028 --rc genhtml_function_coverage=1 00:36:27.028 --rc genhtml_legend=1 00:36:27.028 --rc geninfo_all_blocks=1 00:36:27.028 --rc geninfo_unexecuted_blocks=1 00:36:27.028 00:36:27.028 ' 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:27.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:27.028 --rc genhtml_branch_coverage=1 00:36:27.028 --rc genhtml_function_coverage=1 00:36:27.028 --rc genhtml_legend=1 00:36:27.028 --rc geninfo_all_blocks=1 00:36:27.028 --rc geninfo_unexecuted_blocks=1 00:36:27.028 00:36:27.028 ' 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:27.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:27.028 --rc genhtml_branch_coverage=1 00:36:27.028 --rc genhtml_function_coverage=1 00:36:27.028 --rc genhtml_legend=1 00:36:27.028 --rc geninfo_all_blocks=1 00:36:27.028 --rc geninfo_unexecuted_blocks=1 00:36:27.028 00:36:27.028 ' 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:27.028 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:27.029 
20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.029 20:37:38 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:27.029 
20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:27.029 ************************************ 00:36:27.029 START TEST nvmf_abort 00:36:27.029 ************************************ 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:27.029 * Looking for test storage... 
00:36:27.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:36:27.029 20:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:36:27.289 20:37:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:27.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:27.289 --rc genhtml_branch_coverage=1 00:36:27.289 --rc genhtml_function_coverage=1 00:36:27.289 --rc genhtml_legend=1 00:36:27.289 --rc geninfo_all_blocks=1 00:36:27.289 --rc geninfo_unexecuted_blocks=1 00:36:27.289 00:36:27.289 ' 00:36:27.289 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:27.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:27.289 --rc genhtml_branch_coverage=1 00:36:27.289 --rc genhtml_function_coverage=1 00:36:27.289 --rc genhtml_legend=1 00:36:27.289 --rc geninfo_all_blocks=1 00:36:27.289 --rc geninfo_unexecuted_blocks=1 00:36:27.289 00:36:27.289 ' 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:27.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:27.290 --rc genhtml_branch_coverage=1 00:36:27.290 --rc genhtml_function_coverage=1 00:36:27.290 --rc genhtml_legend=1 00:36:27.290 --rc geninfo_all_blocks=1 00:36:27.290 --rc geninfo_unexecuted_blocks=1 00:36:27.290 00:36:27.290 ' 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:27.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:27.290 --rc genhtml_branch_coverage=1 00:36:27.290 --rc genhtml_function_coverage=1 00:36:27.290 --rc genhtml_legend=1 00:36:27.290 --rc geninfo_all_blocks=1 00:36:27.290 --rc geninfo_unexecuted_blocks=1 00:36:27.290 00:36:27.290 ' 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:27.290 20:37:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:27.290 20:37:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:36:27.290 20:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:29.197 20:37:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:29.197 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:29.197 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:29.197 
20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:29.197 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:29.198 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:29.198 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:29.198 20:37:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:29.198 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:29.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:29.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:36:29.458 00:36:29.458 --- 10.0.0.2 ping statistics --- 00:36:29.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:29.458 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:29.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:29.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:36:29.458 00:36:29.458 --- 10.0.0.1 ping statistics --- 00:36:29.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:29.458 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=407831 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 407831 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 407831 ']' 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:29.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:29.458 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:29.458 [2024-11-18 20:37:41.350250] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:29.458 [2024-11-18 20:37:41.351324] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:36:29.458 [2024-11-18 20:37:41.351395] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:29.458 [2024-11-18 20:37:41.423016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:29.717 [2024-11-18 20:37:41.468900] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:29.717 [2024-11-18 20:37:41.468956] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:29.717 [2024-11-18 20:37:41.468970] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:29.717 [2024-11-18 20:37:41.468981] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:29.717 [2024-11-18 20:37:41.468991] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:29.717 [2024-11-18 20:37:41.470377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:29.717 [2024-11-18 20:37:41.470438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:29.717 [2024-11-18 20:37:41.470441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:29.717 [2024-11-18 20:37:41.552737] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:29.717 [2024-11-18 20:37:41.552939] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:29.717 [2024-11-18 20:37:41.552956] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:36:29.717 [2024-11-18 20:37:41.553217] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:29.717 [2024-11-18 20:37:41.611127] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:36:29.717 Malloc0 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:29.717 Delay0 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:29.717 [2024-11-18 20:37:41.683316] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.717 20:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:36:29.978 [2024-11-18 20:37:41.750779] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:31.886 Initializing NVMe Controllers 00:36:31.886 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:31.886 controller IO queue size 128 less than required 00:36:31.886 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:36:31.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:36:31.886 Initialization complete. Launching workers. 
00:36:31.886 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 29388 00:36:31.886 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29449, failed to submit 66 00:36:31.886 success 29388, unsuccessful 61, failed 0 00:36:31.886 20:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:31.886 20:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.886 20:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:31.886 20:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.886 20:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:36:31.886 20:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:36:31.886 20:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:31.886 20:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:36:31.886 20:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:31.886 20:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:36:31.886 20:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:31.886 20:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:31.886 rmmod nvme_tcp 00:36:32.146 rmmod nvme_fabrics 00:36:32.146 rmmod nvme_keyring 00:36:32.146 20:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:32.146 20:37:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:36:32.146 20:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:36:32.146 20:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 407831 ']' 00:36:32.146 20:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 407831 00:36:32.146 20:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 407831 ']' 00:36:32.146 20:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 407831 00:36:32.146 20:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:36:32.147 20:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:32.147 20:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 407831 00:36:32.147 20:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:32.147 20:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:32.147 20:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 407831' 00:36:32.147 killing process with pid 407831 00:36:32.147 20:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 407831 00:36:32.147 20:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 407831 00:36:32.407 20:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:32.407 20:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:32.407 20:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:32.407 20:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:36:32.407 20:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:36:32.407 20:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:32.407 20:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:36:32.407 20:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:32.407 20:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:32.407 20:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:32.407 20:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:32.407 20:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:34.314 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:34.314 00:36:34.314 real 0m7.337s 00:36:34.314 user 0m9.337s 00:36:34.314 sys 0m2.883s 00:36:34.314 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:34.314 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:34.314 ************************************ 00:36:34.314 END TEST nvmf_abort 00:36:34.314 ************************************ 00:36:34.314 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 
-- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:34.314 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:34.314 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:34.314 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:34.314 ************************************ 00:36:34.314 START TEST nvmf_ns_hotplug_stress 00:36:34.314 ************************************ 00:36:34.314 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:34.573 * Looking for test storage... 00:36:34.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 
ver2_l 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:34.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:34.573 --rc genhtml_branch_coverage=1 00:36:34.573 --rc genhtml_function_coverage=1 00:36:34.573 --rc genhtml_legend=1 00:36:34.573 --rc geninfo_all_blocks=1 00:36:34.573 --rc geninfo_unexecuted_blocks=1 00:36:34.573 00:36:34.573 ' 00:36:34.573 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:34.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:34.573 --rc genhtml_branch_coverage=1 00:36:34.573 --rc genhtml_function_coverage=1 00:36:34.573 --rc genhtml_legend=1 00:36:34.574 --rc geninfo_all_blocks=1 00:36:34.574 --rc geninfo_unexecuted_blocks=1 00:36:34.574 00:36:34.574 ' 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:34.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:34.574 --rc genhtml_branch_coverage=1 00:36:34.574 --rc genhtml_function_coverage=1 00:36:34.574 --rc genhtml_legend=1 00:36:34.574 --rc geninfo_all_blocks=1 00:36:34.574 --rc geninfo_unexecuted_blocks=1 00:36:34.574 00:36:34.574 ' 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:34.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:34.574 --rc genhtml_branch_coverage=1 00:36:34.574 --rc genhtml_function_coverage=1 00:36:34.574 --rc genhtml_legend=1 00:36:34.574 --rc geninfo_all_blocks=1 00:36:34.574 --rc geninfo_unexecuted_blocks=1 00:36:34.574 00:36:34.574 ' 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@7 -- # uname -s 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:34.574 20:37:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:34.574 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:34.575 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:34.575 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:34.575 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:34.575 20:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:36:34.575 20:37:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:36:37.108 20:37:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:37.108 
20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:37.108 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:37.108 20:37:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:37.108 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:37.108 20:37:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:37.108 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:37.108 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:37.108 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:37.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:37.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:36:37.109 00:36:37.109 --- 10.0.0.2 ping statistics --- 00:36:37.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:37.109 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:37.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:37.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:36:37.109 00:36:37.109 --- 10.0.0.1 ping statistics --- 00:36:37.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:37.109 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:37.109 20:37:48 
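The namespace plumbing that `nvmf_tcp_init` performed above (move the target port into a netns, address both sides, open TCP/4420, then ping both directions) can be sketched as a standalone script. The `run`/`DRY_RUN` wrapper is my addition so the sketch can be read and exercised without root; interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the 10.0.0.x addressing are taken directly from this log.

```shell
#!/usr/bin/env bash
# Sketch of the TCP test-network setup seen in nvmf/common.sh (a
# reconstruction from this log, not the canonical script).
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else sudo "$@"; fi; }

TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"          # target NIC moves into the namespace
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"   # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port toward the initiator-side interface
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                            # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1        # target -> initiator
```

With `DRY_RUN=1` (the default) every command is only echoed, which matches the order of operations recorded in the log.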
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=410045 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 410045 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 410045 ']' 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:37.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:37.109 [2024-11-18 20:37:48.767625] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:37.109 [2024-11-18 20:37:48.768748] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:36:37.109 [2024-11-18 20:37:48.768825] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:37.109 [2024-11-18 20:37:48.839367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:37.109 [2024-11-18 20:37:48.882080] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:37.109 [2024-11-18 20:37:48.882135] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:37.109 [2024-11-18 20:37:48.882158] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:37.109 [2024-11-18 20:37:48.882174] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:37.109 [2024-11-18 20:37:48.882184] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:37.109 [2024-11-18 20:37:48.883667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:37.109 [2024-11-18 20:37:48.883761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:37.109 [2024-11-18 20:37:48.883764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:37.109 [2024-11-18 20:37:48.962796] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:37.109 [2024-11-18 20:37:48.962994] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:37.109 [2024-11-18 20:37:48.962996] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:37.109 [2024-11-18 20:37:48.963286] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:37.109 20:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:37.110 20:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:37.110 20:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
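The `nvmfappstart -m 0xE` step above boots the target inside the namespace and then blocks in `waitforlisten` until the RPC socket answers. A hedged sketch of that launch sequence follows; `SPDK_DIR`, the `run`/`DRY_RUN` wrapper, and the polling loop are my reconstructions from the log, and the `-i 0 -e 0xFFFF --interrupt-mode -m 0xE` flags are copied from it verbatim.

```shell
#!/usr/bin/env bash
# Sketch: nvmf_tgt runs inside cvl_0_0_ns_spdk, in interrupt mode,
# pinned to cores 1-3 (-m 0xE), with all tracepoint groups enabled.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else sudo "$@"; fi; }

NS=cvl_0_0_ns_spdk
run modprobe nvme-tcp
run ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!
# waitforlisten equivalent: poll until /var/tmp/spdk.sock accepts RPCs
if [ "$DRY_RUN" != 1 ]; then
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
          >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1  # target died before listening
        sleep 0.5
    done
fi
```

The log's reactor notices ("Reactor started on core 1/2/3") correspond to the `-m 0xE` core mask, and the per-thread "Set spdk_thread ... to intr mode" lines come from `--interrupt-mode`.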
00:36:37.110 20:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:37.368 [2024-11-18 20:37:49.264516] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:37.368 20:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:37.627 20:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:37.885 [2024-11-18 20:37:49.813676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:37.885 20:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:38.150 20:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:36:38.412 Malloc0 00:36:38.412 20:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:38.672 Delay0 00:36:38.931 20:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:39.189 20:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:36:39.447 NULL1 00:36:39.447 20:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:36:39.705 20:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=410455 00:36:39.705 20:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:36:39.705 20:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410455 00:36:39.705 20:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:39.963 20:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:40.222 20:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:36:40.222 20:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
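The provisioning RPCs that `ns_hotplug_stress.sh` issued above (transport, subsystem, listeners, then the Malloc0/Delay0/NULL1 bdev chain) can be sketched as one sequence. The `rpc`/`DRY_RUN` wrapper and `SPDK_DIR` are my additions for readability; every RPC name and argument below is copied from the log.

```shell
#!/usr/bin/env bash
# Sketch of the RPC provisioning sequence (reconstructed from this log).
DRY_RUN=${DRY_RUN:-1}
rpc() { if [ "$DRY_RUN" = 1 ]; then echo "+ rpc.py $*"; else "$SPDK_DIR/scripts/rpc.py" "$@"; fi; }

NQN=nqn.2016-06.io.spdk:cnode1
rpc nvmf_create_transport -t tcp -o -u 8192                      # TCP transport
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10  # up to 10 namespaces
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 512 -b Malloc0                         # 32 MiB, 512 B blocks
rpc bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000                  # delay bdev over Malloc0
rpc nvmf_subsystem_add_ns "$NQN" Delay0
rpc bdev_null_create NULL1 1000 512                              # null bdev, resized each cycle
rpc nvmf_subsystem_add_ns "$NQN" NULL1
```

After this, the log starts `spdk_nvme_perf` against `traddr:10.0.0.2 trsvcid:4420` and records its PID as `PERF_PID` before entering the hotplug loop.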
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:36:40.481 true 00:36:40.481 20:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410455 00:36:40.481 20:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:40.739 20:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:40.998 20:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:36:40.998 20:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:36:41.256 true 00:36:41.256 20:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410455 00:36:41.256 20:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:41.514 20:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:41.772 20:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:36:41.772 20:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:36:42.030 true 00:36:42.030 20:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410455 00:36:42.030 20:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:42.964 Read completed with error (sct=0, sc=11) 00:36:42.964 20:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:43.223 20:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:36:43.223 20:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:36:43.481 true 00:36:43.481 20:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410455 00:36:43.481 20:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:43.739 20:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:43.997 20:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:36:43.997 20:37:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:36:44.255 true 00:36:44.255 20:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410455 00:36:44.256 20:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:44.514 20:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:44.772 20:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:36:44.772 20:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:36:45.030 true 00:36:45.030 20:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410455 00:36:45.030 20:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:46.410 20:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:46.410 20:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
00:36:46.410 20:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:36:46.669 true 00:36:46.669 20:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410455 00:36:46.669 20:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:46.927 20:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:47.185 20:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:36:47.185 20:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:36:47.442 true 00:36:47.442 20:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410455 00:36:47.442 20:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:47.699 20:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:47.956 20:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1009 00:36:47.956 20:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:36:48.214 true 00:36:48.214 20:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410455 00:36:48.214 20:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:49.588 20:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:49.588 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:49.588 20:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:36:49.588 20:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:36:49.846 true 00:36:49.846 20:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410455 00:36:49.846 20:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:50.105 20:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:50.363 20:38:02 
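The repeating cycle in the log (`kill -0 $PERF_PID`, `nvmf_subsystem_remove_ns`, `nvmf_subsystem_add_ns`, `bdev_null_resize` with `null_size` stepping 1001, 1002, ...) is the actual stress loop. A hedged sketch, with the same `rpc`/`DRY_RUN` wrapper as above and a single-pass guard added so it can run outside the CI environment:

```shell
#!/usr/bin/env bash
# Sketch of the hotplug stress cycle: while spdk_nvme_perf ($PERF_PID)
# hammers the subsystem, nsid 1 is removed, re-added, and NULL1 grown.
DRY_RUN=${DRY_RUN:-1}
rpc() { if [ "$DRY_RUN" = 1 ]; then echo "+ rpc.py $*"; else "$SPDK_DIR/scripts/rpc.py" "$@"; fi; }

NQN=nqn.2016-06.io.spdk:cnode1
PERF_PID=${PERF_PID:-0}        # PID of the backgrounded spdk_nvme_perf run
null_size=1000
while [ "$DRY_RUN" = 1 ] || kill -0 "$PERF_PID" 2>/dev/null; do
    rpc nvmf_subsystem_remove_ns "$NQN" 1       # yank nsid 1 under load
    rpc nvmf_subsystem_add_ns "$NQN" Delay0     # plug it back in
    null_size=$((null_size + 1))
    rpc bdev_null_resize NULL1 "$null_size"     # grow NULL1: 1001, 1002, ...
    [ "$DRY_RUN" = 1 ] && break                 # single pass when dry-running
done
echo "final null_size=$null_size"
```

The "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines in the log are the perf initiator hitting the window where nsid 1 is detached, which is exactly the condition the loop is designed to provoke.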
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:36:50.363 20:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:36:50.622 true 00:36:50.622 20:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410455 00:36:50.622 20:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:50.881 20:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:51.139 20:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:36:51.139 20:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:36:51.397 true 00:36:51.397 20:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410455 00:36:51.397 20:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:52.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:52.335 20:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:52.593 20:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:36:52.593 20:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:36:52.852 true 00:36:52.852 20:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410455 00:36:52.852 20:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:53.110 20:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:53.369 20:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:36:53.369 20:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:36:53.629 true 00:36:53.888 20:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410455 00:36:53.888 20:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:54.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:54.456 20:38:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:55.023 20:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:36:55.023 20:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:36:55.023 true 00:36:55.023 20:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410455 00:36:55.023 20:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:55.289 20:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:55.595 20:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:36:55.595 20:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:36:55.890 true 00:36:55.890 20:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410455 00:36:55.890 20:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:36:56.149 20:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:56.407 20:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:36:56.407 20:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:36:56.665 true 00:36:56.665 20:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410455 00:36:56.665 20:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:58.039 20:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:58.039 20:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:36:58.039 20:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:36:58.297 true 00:36:58.297 20:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410455 00:36:58.297 20:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:36:58.555 20:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:58.814 20:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:36:58.814 20:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:36:59.072 true 00:36:59.072 20:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410455 00:36:59.072 20:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:59.330 20:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:59.898 20:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:36:59.898 20:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:36:59.898 true 00:36:59.898 20:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410455 00:36:59.898 20:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:01.278 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:01.278 20:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:01.278 20:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:37:01.278 20:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:37:01.536 true 00:37:01.536 20:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410455 00:37:01.536 20:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:01.794 20:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:02.052 20:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:37:02.052 20:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:37:02.310 true 00:37:02.310 20:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410455 00:37:02.310 20:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:03.247 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:03.247 20:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:03.505 20:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:37:03.505 20:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:37:03.763 true 00:37:03.763 20:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410455 00:37:03.763 20:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:04.022 20:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:04.280 20:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:37:04.280 20:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:37:04.538 true 00:37:04.538 20:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
410455 00:37:04.538 20:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:04.797 20:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:05.055 20:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:37:05.055 20:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:37:05.313 true 00:37:05.583 20:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410455 00:37:05.583 20:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:06.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:06.520 20:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:06.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:06.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:06.520 20:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:37:06.520 20:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:37:06.780 true 00:37:07.038 20:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410455 00:37:07.038 20:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:07.295 20:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:07.552 20:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:37:07.552 20:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:37:07.810 true 00:37:07.810 20:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410455 00:37:07.810 20:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:08.375 20:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:08.375 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:08.942 20:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1028 00:37:08.942 20:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:37:08.942 true 00:37:08.942 20:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410455 00:37:08.942 20:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:09.200 20:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:09.458 20:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:37:09.458 20:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:37:09.715 true 00:37:09.973 20:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410455 00:37:09.973 20:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:10.907 Initializing NVMe Controllers
00:37:10.907 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:37:10.907 Controller IO queue size 128, less than required.
00:37:10.907 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:10.907 Controller IO queue size 128, less than required.
00:37:10.907 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:10.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:37:10.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:37:10.907 Initialization complete. Launching workers.
00:37:10.907 ========================================================
00:37:10.907                                                           Latency(us)
00:37:10.907 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:37:10.907 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     363.40       0.18  140907.52    3416.61 1013810.61
00:37:10.907 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    7925.00       3.87   16103.32    1863.92  450003.75
00:37:10.907 ========================================================
00:37:10.907 Total                                                                   :    8288.40       4.05   21575.28    1863.92 1013810.61
00:37:10.907
00:37:10.907 20:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:10.907 20:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:37:10.907 20:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:37:11.165 true 00:37:11.165 20:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410455 00:37:11.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (410455) - No such process 00:37:11.165 20:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@53 -- # wait 410455 00:37:11.165 20:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:11.424 20:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:11.682 20:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:37:11.682 20:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:37:11.682 20:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:37:11.682 20:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:11.682 20:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:37:11.940 null0 00:37:11.940 20:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:11.940 20:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:11.940 20:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:37:12.198 null1 00:37:12.198 20:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:12.198 
20:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:12.198 20:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:37:12.456 null2 00:37:12.715 20:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:12.715 20:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:12.715 20:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:37:12.974 null3 00:37:12.974 20:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:12.974 20:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:12.974 20:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:37:13.236 null4 00:37:13.236 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:13.236 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:13.236 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:37:13.494 null5 00:37:13.495 20:38:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:13.495 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:13.495 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:37:13.753 null6 00:37:13.753 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:13.753 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:13.753 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:37:14.012 null7 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:14.012 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:14.013 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:14.013 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:37:14.013 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:14.013 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:37:14.013 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:14.013 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.013 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:14.013 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:14.013 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:37:14.013 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:14.013 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:37:14.013 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:14.013 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:14.013 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.013 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:14.013 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:14.013 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:37:14.013 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:14.013 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:37:14.013 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:14.013 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:14.013 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 414481 414482 414484 414486 414488 414490 414492 414494 00:37:14.013 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.013 20:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:14.271 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:14.271 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:14.271 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:37:14.271 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:14.271 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:14.271 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:14.271 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:14.271 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:14.529 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.529 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.529 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:14.529 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.529 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.529 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:14.529 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.529 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.529 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:14.529 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.529 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.529 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:14.529 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.529 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.529 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:14.529 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:37:14.529 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.529 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:14.529 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.529 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.529 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:14.529 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.529 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.529 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:14.788 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:14.788 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:14.788 20:38:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:14.788 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:14.788 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:14.788 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:14.788 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:14.788 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:15.046 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.046 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.046 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:15.046 20:38:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.046 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.046 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:15.046 20:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.046 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.046 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:15.046 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.046 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.047 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:15.047 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.047 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.047 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:15.047 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.047 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.047 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:15.047 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.047 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.047 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:15.047 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.047 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.047 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:15.305 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:15.305 20:38:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:15.305 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:15.305 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:15.305 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:15.305 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:15.305 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:15.305 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:15.873 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.873 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.873 20:38:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:15.873 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.873 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.873 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:15.873 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.873 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.873 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:15.873 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.873 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.873 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:15.873 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.873 20:38:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.873 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:15.873 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.873 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.873 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:15.873 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.873 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.873 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:15.873 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.873 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.873 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:16.132 20:38:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:16.132 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:16.132 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:16.132 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:16.132 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:16.132 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:16.132 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:16.132 20:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:16.390 20:38:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.390 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.390 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:16.390 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.390 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.390 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:16.390 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.390 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.390 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:16.390 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.390 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.390 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:16.390 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.390 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.390 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:16.390 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.391 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.391 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:16.391 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.391 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.391 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:16.391 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.391 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.391 20:38:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:16.649 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:16.649 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:16.649 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:16.649 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:16.649 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:16.649 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:16.649 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:16.649 20:38:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:16.907 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.907 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.907 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:16.907 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.907 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.907 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:16.907 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.907 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.907 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:16.907 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.907 20:38:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.907 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.907 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:16.907 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.907 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:16.907 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.907 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.907 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:16.907 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.907 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.907 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:16.907 20:38:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.907 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.907 20:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:17.165 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:17.165 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:17.165 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:17.165 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:17.165 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:17.165 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:17.165 20:38:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:17.165 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:17.422 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.422 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.422 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:17.422 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.422 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.423 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:17.423 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.423 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.423 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
2 nqn.2016-06.io.spdk:cnode1 null1 00:37:17.423 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.423 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.423 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:17.423 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.423 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.423 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:17.423 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.423 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.423 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:17.423 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.423 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.423 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:17.423 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.423 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.423 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:17.988 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:17.988 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:17.988 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:17.988 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:17.988 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:17.988 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:17.988 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:17.988 20:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:18.246 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.246 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.246 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:18.246 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.246 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.246 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:18.246 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.246 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.246 20:38:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:18.246 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.246 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.246 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:18.246 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.246 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.246 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:18.246 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.246 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.246 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:18.246 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.246 20:38:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.246 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.246 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:18.246 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.246 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:18.504 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:18.504 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:18.504 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:18.504 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:18.504 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:18.504 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:18.504 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:18.504 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:18.762 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.762 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.762 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:18.762 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.762 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.762 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:18.762 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.762 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.762 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:18.762 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.762 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.762 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:18.762 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.762 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.762 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:18.762 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.762 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.762 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 
null0 00:37:18.762 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.762 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.762 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:18.762 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.762 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.762 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:19.021 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:19.021 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:19.021 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:19.021 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:37:19.021 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:19.021 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:19.021 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:19.021 20:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:19.280 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.280 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.280 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:19.280 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.280 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.280 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:19.280 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.280 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.280 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.280 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:19.280 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.280 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:19.280 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.280 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.280 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:19.280 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.280 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.280 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:19.280 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.280 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.280 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:19.280 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.280 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.280 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:19.539 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:19.539 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:19.539 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:19.539 20:38:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:19.539 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:19.539 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:19.539 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:19.797 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:37:20.055 20:38:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:20.055 rmmod nvme_tcp 00:37:20.055 rmmod nvme_fabrics 00:37:20.055 rmmod nvme_keyring 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 410045 ']' 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 410045 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 410045 ']' 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 410045 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 410045 00:37:20.055 20:38:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:20.055 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 410045' 00:37:20.055 killing process with pid 410045 00:37:20.056 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 410045 00:37:20.056 20:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 410045 00:37:20.314 20:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:20.314 20:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:20.314 20:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:20.314 20:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:37:20.314 20:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:37:20.314 20:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:20.314 20:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:37:20.314 20:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:20.314 20:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:20.314 20:38:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:20.314 20:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:20.314 20:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:22.854 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:22.854 00:37:22.854 real 0m47.933s 00:37:22.854 user 3m21.640s 00:37:22.854 sys 0m22.253s 00:37:22.854 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:22.854 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:22.854 ************************************ 00:37:22.854 END TEST nvmf_ns_hotplug_stress 00:37:22.854 ************************************ 00:37:22.854 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:22.854 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:22.854 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:22.854 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:22.854 ************************************ 00:37:22.854 START TEST nvmf_delete_subsystem 00:37:22.854 ************************************ 00:37:22.854 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:22.854 * Looking for test storage... 00:37:22.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:22.854 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:22.854 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:37:22.854 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:22.854 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:22.854 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:22.854 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:22.854 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:22.854 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:37:22.854 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:37:22.854 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:37:22.854 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:37:22.854 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:37:22.854 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:37:22.854 
20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:37:22.854 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:22.854 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:37:22.854 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:37:22.854 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:22.854 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:22.854 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:37:22.854 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:37:22.855 20:38:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:22.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.855 --rc genhtml_branch_coverage=1 00:37:22.855 --rc genhtml_function_coverage=1 00:37:22.855 --rc genhtml_legend=1 00:37:22.855 --rc geninfo_all_blocks=1 00:37:22.855 --rc geninfo_unexecuted_blocks=1 00:37:22.855 00:37:22.855 ' 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:22.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.855 --rc genhtml_branch_coverage=1 00:37:22.855 --rc genhtml_function_coverage=1 00:37:22.855 --rc genhtml_legend=1 00:37:22.855 --rc geninfo_all_blocks=1 00:37:22.855 --rc geninfo_unexecuted_blocks=1 00:37:22.855 00:37:22.855 ' 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:22.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.855 --rc genhtml_branch_coverage=1 00:37:22.855 --rc genhtml_function_coverage=1 00:37:22.855 --rc genhtml_legend=1 00:37:22.855 --rc geninfo_all_blocks=1 00:37:22.855 --rc 
geninfo_unexecuted_blocks=1 00:37:22.855 00:37:22.855 ' 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:22.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.855 --rc genhtml_branch_coverage=1 00:37:22.855 --rc genhtml_function_coverage=1 00:37:22.855 --rc genhtml_legend=1 00:37:22.855 --rc geninfo_all_blocks=1 00:37:22.855 --rc geninfo_unexecuted_blocks=1 00:37:22.855 00:37:22.855 ' 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
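The trace above exercises the `lt`/`cmp_versions` helpers from `scripts/common.sh`, which split dotted versions on `.-:` and compare them field by field (here confirming lcov 1.15 < 2 before choosing LCOV options). A minimal standalone reconstruction of that comparison, assuming only bash; the function body is inferred from the traced steps, not copied from SPDK:

```shell
#!/usr/bin/env bash
# Reconstruction of the version comparison traced above (scripts/common.sh).
# ver_lt VER1 VER2 -> returns 0 if VER1 < VER2, comparing numeric fields
# left to right; missing trailing fields count as 0.
ver_lt() {
    local -a ver1 ver2
    local v n a b
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    # Walk as many fields as the longer version has.
    n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < n; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal -> not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"         # the lcov check from the trace
ver_lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
```

The field-wise walk is why `1.15 < 2` holds even though 1.15 > 2 as a decimal number: only the first fields (1 vs 2) are compared before the result is known.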
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.855 
20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:22.855 20:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:37:22.855 20:38:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:24.865 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:37:24.865 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:24.865 20:38:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:24.865 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:24.865 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:24.865 20:38:36 
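The discovery loop traced above (`gather_supported_nvmf_pci_devs`) maps each whitelisted NIC PCI address to its kernel interface name by globbing the device's `net/` directory in sysfs, producing the "Found net devices under 0000:0a:00.x: cvl_0_x" lines. A hedged sketch of that sysfs lookup, independent of the SPDK helpers (it reports every PCI network device rather than filtering by the e810/x722/mlx ID tables, and is safe to run unprivileged since it only reads sysfs):

```shell
#!/usr/bin/env bash
# For every PCI device that exposes a net/ directory, report the interface
# name(s) the kernel created for it -- the same bdf -> interface mapping the
# trace above performs for 0000:0a:00.0/.1 (cvl_0_0 / cvl_0_1).
list_pci_net_devs() {
    local dev bdf vendor device ifpath
    shopt -s nullglob   # empty loop, not a literal glob, when nothing matches
    for dev in /sys/bus/pci/devices/*; do
        [[ -d $dev/net ]] || continue     # skip non-network PCI functions
        bdf=${dev##*/}
        vendor=$(<"$dev/vendor") device=$(<"$dev/device")
        for ifpath in "$dev"/net/*; do
            echo "Found net devices under $bdf ($vendor - $device): ${ifpath##*/}"
        done
    done
    return 0
}

list_pci_net_devs
```

On a host without PCI NICs (or without sysfs) the loop simply prints nothing, which mirrors the `(( 0 > 0 ))` / `(( 2 == 0 ))` emptiness checks in the trace.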
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:37:24.865 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:37:24.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:24.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:37:24.866 00:37:24.866 --- 10.0.0.2 ping statistics --- 00:37:24.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:24.866 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:24.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:24.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:37:24.866 00:37:24.866 --- 10.0.0.1 ping statistics --- 00:37:24.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:24.866 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
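`nvmf_tcp_init`, traced above, splits one two-port NIC into a target/initiator pair on a single host: the first port moves into a private network namespace (the SPDK target side, 10.0.0.2) while the second stays in the root namespace (the initiator, 10.0.0.1), an iptables ACCEPT rule opens TCP port 4420, and bidirectional pings verify the path. A sketch of that sequence reconstructed from the trace; since namespace and iptables changes need root, this version only prints each command (drop the `echo` in `run` to execute for real):

```shell
#!/usr/bin/env bash
# Reconstruction of the nvmf_tcp_init steps traced above. Interface names
# (cvl_0_0/cvl_0_1) and addresses match this run; run() echoes instead of
# executing so the sketch is safe without root.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk            # namespace that will host the SPDK target
TGT_IF=cvl_0_0 TGT_IP=10.0.0.2
INI_IF=cvl_0_1 INI_IP=10.0.0.1

run ip -4 addr flush "$TGT_IF"               # start from clean addresses
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"        # move target port into the netns
run ip addr add "$INI_IP/24" dev "$INI_IF"
run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TGT_IP"                      # initiator -> target reachability
run ip netns exec "$NS" ping -c 1 "$INI_IP"  # target -> initiator
```

This is why the target app is later launched as `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...`: the `NVMF_TARGET_NS_CMD` prefix set in the trace confines the target's network stack to the namespace while the initiator-side tests run from the root namespace.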
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=417348 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 417348 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 417348 ']' 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:24.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:24.866 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:24.866 [2024-11-18 20:38:36.765907] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:24.866 [2024-11-18 20:38:36.767034] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:37:24.866 [2024-11-18 20:38:36.767111] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:24.866 [2024-11-18 20:38:36.838216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:25.125 [2024-11-18 20:38:36.884524] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:25.125 [2024-11-18 20:38:36.884574] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:25.125 [2024-11-18 20:38:36.884597] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:25.125 [2024-11-18 20:38:36.884609] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:25.125 [2024-11-18 20:38:36.884619] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:25.125 [2024-11-18 20:38:36.887660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:25.125 [2024-11-18 20:38:36.887673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:25.125 [2024-11-18 20:38:36.971709] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:37:25.125 [2024-11-18 20:38:36.971740] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:25.126 [2024-11-18 20:38:36.972000] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:25.126 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:25.126 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:37:25.126 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:25.126 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:25.126 20:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:25.126 20:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:25.126 20:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:25.126 20:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.126 20:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:25.126 [2024-11-18 20:38:37.024339] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:25.126 20:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.126 20:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:25.126 20:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.126 20:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:25.126 20:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.126 20:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:25.126 20:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.126 20:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:25.126 [2024-11-18 20:38:37.040578] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:25.126 20:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.126 20:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:37:25.126 20:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.126 20:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:25.126 NULL1 00:37:25.126 20:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.126 20:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:25.126 20:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.126 20:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:25.126 Delay0 00:37:25.126 20:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.126 20:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:25.126 20:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.126 20:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:25.126 20:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.126 20:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=417384 00:37:25.126 20:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:25.126 20:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:37:25.126 [2024-11-18 20:38:37.115162] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:37:27.654 20:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:27.654 20:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.654 20:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 starting I/O failed: -6 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 starting I/O failed: -6 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 starting I/O failed: -6 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 starting I/O failed: -6 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 starting I/O failed: -6 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 starting I/O failed: -6 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, 
sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 starting I/O failed: -6 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 starting I/O failed: -6 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 starting I/O failed: -6 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 starting I/O failed: -6 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 [2024-11-18 20:38:39.315007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1224b40 is same with the state(6) to be set 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Write 
completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 [2024-11-18 20:38:39.317335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12243f0 is same with the state(6) to be set 00:37:27.654 Write completed with 
error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 starting I/O failed: -6 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 starting I/O failed: -6 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 starting I/O failed: -6 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 starting I/O failed: -6 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 starting I/O failed: -6 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 starting I/O failed: -6 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 starting I/O failed: -6 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 starting I/O failed: -6 00:37:27.654 Write 
completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 starting I/O failed: -6 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 [2024-11-18 20:38:39.318042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f44e4000c40 is same with the state(6) to be set 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Write completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.654 Read completed with error (sct=0, sc=8) 00:37:27.655 Write completed with error (sct=0, sc=8) 00:37:27.655 Write completed with error (sct=0, sc=8) 00:37:27.655 Write completed with error (sct=0, sc=8) 00:37:27.655 Read completed with error (sct=0, sc=8) 00:37:27.655 Write completed with error (sct=0, sc=8) 00:37:27.655 Read completed with error (sct=0, sc=8) 00:37:27.655 Read completed with error (sct=0, sc=8) 00:37:27.655 Read completed with error (sct=0, sc=8) 00:37:27.655 Read completed with error (sct=0, sc=8) 00:37:27.655 Read completed with error (sct=0, sc=8) 00:37:27.655 Write completed with error (sct=0, sc=8) 00:37:27.655 Read completed with error (sct=0, sc=8) 00:37:27.655 Read completed with error (sct=0, sc=8) 00:37:27.655 Read completed with error (sct=0, sc=8) 00:37:27.655 Read completed with error (sct=0, sc=8) 00:37:27.655 Read completed with error (sct=0, 
sc=8) 00:37:27.655 Read completed with error (sct=0, sc=8) 00:37:27.655 Read completed with error (sct=0, sc=8) 00:37:27.655 Read completed with error (sct=0, sc=8) 00:37:27.655 Read completed with error (sct=0, sc=8) 00:37:27.655 Write completed with error (sct=0, sc=8) 00:37:27.655 Read completed with error (sct=0, sc=8) 00:37:27.655 Read completed with error (sct=0, sc=8) 00:37:27.655 Read completed with error (sct=0, sc=8) 00:37:27.655 Write completed with error (sct=0, sc=8) 00:37:27.655 Write completed with error (sct=0, sc=8) 00:37:27.655 Read completed with error (sct=0, sc=8) 00:37:27.655 Read completed with error (sct=0, sc=8) 00:37:27.655 Read completed with error (sct=0, sc=8) 00:37:27.655 Read completed with error (sct=0, sc=8) 00:37:27.655 Read completed with error (sct=0, sc=8) 00:37:27.655 Read completed with error (sct=0, sc=8) 00:37:28.588 [2024-11-18 20:38:40.297088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12325b0 is same with the state(6) to be set 00:37:28.588 Write completed with error (sct=0, sc=8) 00:37:28.588 Read completed with error (sct=0, sc=8) 00:37:28.588 Read completed with error (sct=0, sc=8) 00:37:28.588 Read completed with error (sct=0, sc=8) 00:37:28.588 Read completed with error (sct=0, sc=8) 00:37:28.588 Read completed with error (sct=0, sc=8) 00:37:28.588 Read completed with error (sct=0, sc=8) 00:37:28.588 Read completed with error (sct=0, sc=8) 00:37:28.588 Read completed with error (sct=0, sc=8) 00:37:28.588 Write completed with error (sct=0, sc=8) 00:37:28.588 Read completed with error (sct=0, sc=8) 00:37:28.588 Read completed with error (sct=0, sc=8) 00:37:28.588 Read completed with error (sct=0, sc=8) 00:37:28.588 Read completed with error (sct=0, sc=8) 00:37:28.588 Read completed with error (sct=0, sc=8) 00:37:28.588 Write completed with error (sct=0, sc=8) 00:37:28.588 Read completed with error (sct=0, sc=8) 00:37:28.588 Write completed with error (sct=0, sc=8) 00:37:28.588 
[2024-11-18 20:38:40.319379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1224810 is same with the state(6) to be set 00:37:28.588 Write completed with error (sct=0, sc=8) 00:37:28.588 Read completed with error (sct=0, sc=8) 00:37:28.588 Write completed with error (sct=0, sc=8) 00:37:28.588 Read completed with error (sct=0, sc=8) 00:37:28.589 Write completed with error (sct=0, sc=8) 00:37:28.589 Read completed with error (sct=0, sc=8) 00:37:28.589 Read completed with error (sct=0, sc=8) 00:37:28.589 Read completed with error (sct=0, sc=8) 00:37:28.589 Read completed with error (sct=0, sc=8) 00:37:28.589 Read completed with error (sct=0, sc=8) 00:37:28.589 Write completed with error (sct=0, sc=8) 00:37:28.589 Write completed with error (sct=0, sc=8) 00:37:28.589 Read completed with error (sct=0, sc=8) 00:37:28.589 Write completed with error (sct=0, sc=8) 00:37:28.589 Read completed with error (sct=0, sc=8) 00:37:28.589 Read completed with error (sct=0, sc=8) 00:37:28.589 Read completed with error (sct=0, sc=8) 00:37:28.589 Read completed with error (sct=0, sc=8) 00:37:28.589 [2024-11-18 20:38:40.319540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1224e70 is same with the state(6) to be set 00:37:28.589 Write completed with error (sct=0, sc=8) 00:37:28.589 Write completed with error (sct=0, sc=8) 00:37:28.589 Read completed with error (sct=0, sc=8) 00:37:28.589 Read completed with error (sct=0, sc=8) 00:37:28.589 Read completed with error (sct=0, sc=8) 00:37:28.589 Write completed with error (sct=0, sc=8) 00:37:28.589 Read completed with error (sct=0, sc=8) 00:37:28.589 Read completed with error (sct=0, sc=8) 00:37:28.589 Read completed with error (sct=0, sc=8) 00:37:28.589 Read completed with error (sct=0, sc=8) 00:37:28.589 Write completed with error (sct=0, sc=8) 00:37:28.589 Read completed with error (sct=0, sc=8) 00:37:28.589 Read completed with error (sct=0, sc=8) 00:37:28.589 
[2024-11-18 20:38:40.319685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f44e400d680 is same with the state(6) to be set 00:37:28.589 Read completed with error (sct=0, sc=8) 00:37:28.589 Write completed with error (sct=0, sc=8) 00:37:28.589 Read completed with error (sct=0, sc=8) 00:37:28.589 Read completed with error (sct=0, sc=8) 00:37:28.589 Read completed with error (sct=0, sc=8) 00:37:28.589 Read completed with error (sct=0, sc=8) 00:37:28.589 Read completed with error (sct=0, sc=8) 00:37:28.589 Read completed with error (sct=0, sc=8) 00:37:28.589 Write completed with error (sct=0, sc=8) 00:37:28.589 Write completed with error (sct=0, sc=8) 00:37:28.589 Read completed with error (sct=0, sc=8) 00:37:28.589 Write completed with error (sct=0, sc=8) 00:37:28.589 Write completed with error (sct=0, sc=8) 00:37:28.589 Read completed with error (sct=0, sc=8) 00:37:28.589 [2024-11-18 20:38:40.320761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f44e400d020 is same with the state(6) to be set 00:37:28.589 Initializing NVMe Controllers 00:37:28.589 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:28.589 Controller IO queue size 128, less than required. 00:37:28.589 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:28.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:28.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:28.589 Initialization complete. Launching workers. 
00:37:28.589 ======================================================== 00:37:28.589 Latency(us) 00:37:28.589 Device Information : IOPS MiB/s Average min max 00:37:28.589 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 158.02 0.08 924699.77 2355.12 1044460.57 00:37:28.589 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 149.58 0.07 946723.18 344.41 1012701.84 00:37:28.589 ======================================================== 00:37:28.589 Total : 307.60 0.15 935409.06 344.41 1044460.57 00:37:28.589 00:37:28.589 [2024-11-18 20:38:40.321278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12325b0 (9): Bad file descriptor 00:37:28.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:37:28.589 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.589 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:37:28.589 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 417384 00:37:28.589 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:37:28.848 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:37:28.848 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 417384 00:37:28.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (417384) - No such process 00:37:28.848 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 417384 00:37:28.848 20:38:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:37:28.848 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 417384 00:37:28.848 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:37:28.848 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:28.848 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:37:28.848 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:28.848 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 417384 00:37:28.848 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:37:28.848 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:28.848 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:28.848 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:28.848 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:28.848 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.848 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:37:28.848 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.848 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:28.848 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.848 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:28.848 [2024-11-18 20:38:40.844571] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:28.848 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.848 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:28.848 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.848 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:29.107 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.107 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=417779 00:37:29.107 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:37:29.107 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 417779 00:37:29.107 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:29.107 20:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:29.107 [2024-11-18 20:38:40.909023] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:37:29.364 20:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:29.364 20:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 417779 00:37:29.364 20:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:29.929 20:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:29.929 20:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 417779 00:37:29.929 20:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:30.495 20:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:30.495 20:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 417779 00:37:30.495 20:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:31.059 20:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:31.059 20:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 417779 00:37:31.059 20:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:31.625 20:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:31.625 20:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 417779 00:37:31.625 20:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:31.882 20:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:31.882 20:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 417779 00:37:31.882 20:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:32.140 Initializing NVMe Controllers 00:37:32.140 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:32.140 Controller IO queue size 128, less than required. 00:37:32.140 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:32.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:32.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:32.140 Initialization complete. Launching workers. 
00:37:32.140 ======================================================== 00:37:32.140 Latency(us) 00:37:32.140 Device Information : IOPS MiB/s Average min max 00:37:32.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003982.90 1000201.91 1041151.90 00:37:32.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005308.08 1000169.89 1042639.67 00:37:32.140 ======================================================== 00:37:32.140 Total : 256.00 0.12 1004645.49 1000169.89 1042639.67 00:37:32.140 00:37:32.397 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:32.397 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 417779 00:37:32.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (417779) - No such process 00:37:32.397 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 417779 00:37:32.397 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:37:32.397 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:37:32.397 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:32.397 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:37:32.397 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:32.397 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:37:32.397 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:37:32.397 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:32.397 rmmod nvme_tcp 00:37:32.397 rmmod nvme_fabrics 00:37:32.655 rmmod nvme_keyring 00:37:32.655 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:32.655 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:37:32.655 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:37:32.655 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 417348 ']' 00:37:32.655 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 417348 00:37:32.655 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 417348 ']' 00:37:32.655 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 417348 00:37:32.655 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:37:32.655 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:32.655 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 417348 00:37:32.655 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:32.655 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:32.655 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 417348' 00:37:32.655 killing process with pid 417348 00:37:32.655 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 417348 00:37:32.655 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 417348 00:37:32.915 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:32.915 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:32.915 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:32.915 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:37:32.915 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:37:32.915 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:32.915 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:37:32.916 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:32.916 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:32.916 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:32.916 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:32.916 20:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:34.827 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:34.827 00:37:34.827 real 0m12.429s 00:37:34.827 user 0m24.784s 00:37:34.827 sys 0m3.845s 00:37:34.827 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:34.827 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:34.827 ************************************ 00:37:34.827 END TEST nvmf_delete_subsystem 00:37:34.827 ************************************ 00:37:34.827 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:34.827 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:34.827 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:34.827 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:34.827 ************************************ 00:37:34.827 START TEST nvmf_host_management 00:37:34.827 ************************************ 00:37:34.827 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:34.827 * Looking for test storage... 
00:37:34.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:34.827 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:34.827 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:37:34.827 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:37:35.087 20:38:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:35.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:35.087 --rc genhtml_branch_coverage=1 00:37:35.087 --rc genhtml_function_coverage=1 00:37:35.087 --rc genhtml_legend=1 00:37:35.087 --rc geninfo_all_blocks=1 00:37:35.087 --rc geninfo_unexecuted_blocks=1 00:37:35.087 00:37:35.087 ' 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:35.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:35.087 --rc genhtml_branch_coverage=1 00:37:35.087 --rc genhtml_function_coverage=1 00:37:35.087 --rc genhtml_legend=1 00:37:35.087 --rc geninfo_all_blocks=1 00:37:35.087 --rc geninfo_unexecuted_blocks=1 00:37:35.087 00:37:35.087 ' 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:35.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:35.087 --rc genhtml_branch_coverage=1 00:37:35.087 --rc genhtml_function_coverage=1 00:37:35.087 --rc genhtml_legend=1 00:37:35.087 --rc geninfo_all_blocks=1 00:37:35.087 --rc geninfo_unexecuted_blocks=1 00:37:35.087 00:37:35.087 ' 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:35.087 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:35.087 --rc genhtml_branch_coverage=1 00:37:35.087 --rc genhtml_function_coverage=1 00:37:35.087 --rc genhtml_legend=1 00:37:35.087 --rc geninfo_all_blocks=1 00:37:35.087 --rc geninfo_unexecuted_blocks=1 00:37:35.087 00:37:35.087 ' 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:35.087 20:38:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:35.087 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.088 
20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:37:35.088 20:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:37:37.622 
20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:37.622 20:38:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:37.622 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:37.622 20:38:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:37.622 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:37.622 20:38:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:37.622 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:37.622 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:37.622 20:38:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:37.622 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:37.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:37.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:37:37.623 00:37:37.623 --- 10.0.0.2 ping statistics --- 00:37:37.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:37.623 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:37.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:37.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:37:37.623 00:37:37.623 --- 10.0.0.1 ping statistics --- 00:37:37.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:37.623 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
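The `nvmf_tcp_init` steps traced above (namespace creation, moving the target NIC, addressing, and the firewall rule) can be sketched as a standalone dry-run script. This is a hedged recreation, not the actual `nvmf/common.sh` code: the `run()` wrapper is a hypothetical stand-in that only prints each command, since the real `ip`/`iptables` calls need root and real `cvl_0_0`/`cvl_0_1` interfaces. The interface names and 10.0.0.0/24 addresses come from the log.

```shell
# Dry-run sketch of the netns plumbing performed by nvmf_tcp_init above.
# run() just echoes; swap its body for "$@" (as root) to apply for real.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk

run ip netns add "$NS"                                  # target-side namespace
run ip link set cvl_0_0 netns "$NS"                     # move target NIC into it
run ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP (host side)
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP (ns side)
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open NVMe/TCP port
```

After this, the log's `ping -c 1 10.0.0.2` / `ip netns exec ... ping -c 1 10.0.0.1` pair verifies connectivity in both directions before the target app is started inside the namespace.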
00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=420235 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 420235 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 420235 ']' 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:37.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:37.623 [2024-11-18 20:38:49.308765] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:37.623 [2024-11-18 20:38:49.309835] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:37:37.623 [2024-11-18 20:38:49.309911] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:37.623 [2024-11-18 20:38:49.380288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:37.623 [2024-11-18 20:38:49.426118] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:37.623 [2024-11-18 20:38:49.426193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:37.623 [2024-11-18 20:38:49.426218] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:37.623 [2024-11-18 20:38:49.426228] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:37.623 [2024-11-18 20:38:49.426238] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:37.623 [2024-11-18 20:38:49.427794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:37.623 [2024-11-18 20:38:49.427856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:37.623 [2024-11-18 20:38:49.427923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:37.623 [2024-11-18 20:38:49.427925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:37.623 [2024-11-18 20:38:49.510325] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:37.623 [2024-11-18 20:38:49.510488] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:37.623 [2024-11-18 20:38:49.510752] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:37.623 [2024-11-18 20:38:49.511312] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:37.623 [2024-11-18 20:38:49.511527] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:37.623 [2024-11-18 20:38:49.564604] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:37.623 20:38:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.623 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:37.623 Malloc0 00:37:37.882 [2024-11-18 20:38:49.636801] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:37.882 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.882 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:37:37.882 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:37.882 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:37.882 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=420290 00:37:37.882 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 420290 /var/tmp/bdevperf.sock 00:37:37.882 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 420290 ']' 00:37:37.882 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 
00:37:37.882 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:37:37.882 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:37.882 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:37.882 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:37:37.882 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:37.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:37:37.882 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:37:37.882 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:37.882 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:37.882 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:37.882 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:37.882 { 00:37:37.882 "params": { 00:37:37.882 "name": "Nvme$subsystem", 00:37:37.882 "trtype": "$TEST_TRANSPORT", 00:37:37.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:37.882 "adrfam": "ipv4", 00:37:37.882 "trsvcid": "$NVMF_PORT", 00:37:37.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:37.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:37.882 "hdgst": ${hdgst:-false}, 00:37:37.882 "ddgst": ${ddgst:-false} 00:37:37.882 }, 00:37:37.882 "method": "bdev_nvme_attach_controller" 00:37:37.882 } 00:37:37.882 EOF 00:37:37.882 )") 00:37:37.882 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:37:37.882 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:37:37.882 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:37:37.882 20:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:37.882 "params": { 00:37:37.882 "name": "Nvme0", 00:37:37.882 "trtype": "tcp", 00:37:37.882 "traddr": "10.0.0.2", 00:37:37.882 "adrfam": "ipv4", 00:37:37.882 "trsvcid": "4420", 00:37:37.882 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:37.882 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:37.882 "hdgst": false, 00:37:37.882 "ddgst": false 00:37:37.882 }, 00:37:37.882 "method": "bdev_nvme_attach_controller" 00:37:37.882 }' 00:37:37.882 [2024-11-18 20:38:49.721394] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:37:37.882 [2024-11-18 20:38:49.721484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid420290 ] 00:37:37.882 [2024-11-18 20:38:49.790403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:37.882 [2024-11-18 20:38:49.836848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:38.141 Running I/O for 10 seconds... 
00:37:38.141 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:38.141 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:37:38.141 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:37:38.141 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.141 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:38.141 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.141 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:38.141 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:37:38.141 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:37:38.141 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:37:38.141 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:37:38.141 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:37:38.141 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:37:38.141 20:38:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:38.141 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:38.141 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:38.141 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.141 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:38.141 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.141 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:37:38.141 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:37:38.141 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:37:38.399 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:37:38.399 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:38.399 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:38.399 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:38.400 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:37:38.400 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:38.400 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.400 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=554 00:37:38.400 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 554 -ge 100 ']' 00:37:38.400 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:37:38.400 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:37:38.400 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:37:38.400 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:38.400 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.400 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:38.400 [2024-11-18 20:38:50.404694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.404762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.404778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.404791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.404803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.404815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.404827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.404839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.404852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.404864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.404876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.404888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.404900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.404912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.404924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.404936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is 
same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.404947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.404971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.404983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.404996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.405018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.405031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.405043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.405055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.405066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.405078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.405090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.405101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be 
set 00:37:38.400 [2024-11-18 20:38:50.405113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.405125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.405137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.405148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.405159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.400 [2024-11-18 20:38:50.405171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d6c0 is same with the state(6) to be set 00:37:38.660 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.660 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:38.660 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.660 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:38.660 [2024-11-18 20:38:50.412514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.660 [2024-11-18 20:38:50.412560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.660 [2024-11-18 20:38:50.412590] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.660 [2024-11-18 20:38:50.412607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.660 [2024-11-18 20:38:50.412624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.660 [2024-11-18 20:38:50.412647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.660 [2024-11-18 20:38:50.412666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.660 [2024-11-18 20:38:50.412680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.660 [2024-11-18 20:38:50.412711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.660 [2024-11-18 20:38:50.412726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.660 [2024-11-18 20:38:50.412741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.660 [2024-11-18 20:38:50.412754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.660 [2024-11-18 20:38:50.412769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.660 [2024-11-18 20:38:50.412783] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.660 [2024-11-18 20:38:50.412798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.660 [2024-11-18 20:38:50.412812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.660 [2024-11-18 20:38:50.412826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.660 [2024-11-18 20:38:50.412840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.660 [2024-11-18 20:38:50.412854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.660 [2024-11-18 20:38:50.412868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.660 [2024-11-18 20:38:50.412884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.660 [2024-11-18 20:38:50.412897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.660 [2024-11-18 20:38:50.412912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.660 [2024-11-18 20:38:50.412926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.660 [2024-11-18 20:38:50.412951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.660 [2024-11-18 20:38:50.412965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.660 [2024-11-18 20:38:50.412980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.660 [2024-11-18 20:38:50.412994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.660 [2024-11-18 20:38:50.413009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.660 [2024-11-18 20:38:50.413022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.660 [2024-11-18 20:38:50.413038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.660 [2024-11-18 20:38:50.413051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.660 [2024-11-18 20:38:50.413066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.660 [2024-11-18 20:38:50.413082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.660 [2024-11-18 20:38:50.413099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.660 [2024-11-18 20:38:50.413113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.660 
[2024-11-18 20:38:50.413127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.413141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.413156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.413170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.413186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.413200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.413215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.413229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.413253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.413268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.413283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.413296] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.413311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.413325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.413340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.413353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.413368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.413380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.413396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.413409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.413424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.413437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.413457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.413471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.413486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.413500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.413515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.413528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.413543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.413557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.413572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.413586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.413601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.413614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.413629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.413650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.413666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.413680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.413695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.413708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.413729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.413743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.413758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.413771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.413787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 
20:38:50.413801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.413817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.413834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.413850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.413863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.413878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.413893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.413908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.413922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.413936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.413950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.413965] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.413979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.413994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.414007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.414022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.414036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.414050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.414064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.414079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.414093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.414108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.414122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.414137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.414151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.414166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.414180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.414204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.414219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.414234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.414247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.414262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.414277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.661 [2024-11-18 20:38:50.414292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.661 [2024-11-18 20:38:50.414305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.662 [2024-11-18 20:38:50.414320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.662 [2024-11-18 20:38:50.414334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.662 [2024-11-18 20:38:50.414349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.662 [2024-11-18 20:38:50.414363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.662 [2024-11-18 20:38:50.414378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.662 [2024-11-18 20:38:50.414392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.662 [2024-11-18 20:38:50.414407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.662 [2024-11-18 20:38:50.414421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.662 [2024-11-18 20:38:50.414436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.662 [2024-11-18 20:38:50.414450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.662 
[2024-11-18 20:38:50.414465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.662 [2024-11-18 20:38:50.414478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.662 [2024-11-18 20:38:50.414515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:38.662 [2024-11-18 20:38:50.414663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:38.662 [2024-11-18 20:38:50.414696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.662 [2024-11-18 20:38:50.414711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:38.662 [2024-11-18 20:38:50.414725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.662 [2024-11-18 20:38:50.414743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:38.662 [2024-11-18 20:38:50.414757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.662 [2024-11-18 20:38:50.414771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:38.662 [2024-11-18 20:38:50.414784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.662 [2024-11-18 20:38:50.414797] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2d70 is same with the state(6) to be set 00:37:38.662 [2024-11-18 20:38:50.415897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:37:38.662 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.662 20:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:37:38.662 task offset: 81920 on job bdev=Nvme0n1 fails 00:37:38.662 00:37:38.662 Latency(us) 00:37:38.662 [2024-11-18T19:38:50.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:38.662 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:38.662 Job: Nvme0n1 ended in about 0.41 seconds with error 00:37:38.662 Verification LBA range: start 0x0 length 0x400 00:37:38.662 Nvme0n1 : 0.41 1572.05 98.25 157.21 0.00 35932.50 2936.98 34758.35 00:37:38.662 [2024-11-18T19:38:50.670Z] =================================================================================================================== 00:37:38.662 [2024-11-18T19:38:50.670Z] Total : 1572.05 98.25 157.21 0.00 35932.50 2936.98 34758.35 00:37:38.662 [2024-11-18 20:38:50.417797] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:38.662 [2024-11-18 20:38:50.417823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2d70 (9): Bad file descriptor 00:37:38.662 [2024-11-18 20:38:50.461888] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
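The bdevperf summary above reports throughput both as IOPS and as MiB/s; with the 64 KiB (65536-byte) IO size this run uses (`-o 65536`), the two columns differ by a fixed factor of 16. A minimal sketch of that conversion, using the 1572.05 IOPS figure from the failed-run table above:

```shell
#!/bin/sh
# Convert a bdevperf IOPS figure to MiB/s for a fixed IO size.
# 1572.05 IOPS and the 65536-byte IO size come from the run above;
# 1 MiB = 1048576 bytes, so MiB/s = IOPS * 65536 / 1048576 = IOPS / 16.
io_size=65536
iops=1572.05
awk -v iops="$iops" -v sz="$io_size" \
    'BEGIN { printf "%.2f MiB/s\n", iops * sz / 1048576 }'
```

This prints `98.25 MiB/s`, matching the MiB/s column of the table.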
00:37:39.596 20:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 420290 00:37:39.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (420290) - No such process 00:37:39.596 20:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:37:39.596 20:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:37:39.596 20:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:37:39.596 20:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:37:39.596 20:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:37:39.596 20:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:37:39.596 20:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:39.596 20:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:39.596 { 00:37:39.596 "params": { 00:37:39.596 "name": "Nvme$subsystem", 00:37:39.596 "trtype": "$TEST_TRANSPORT", 00:37:39.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:39.596 "adrfam": "ipv4", 00:37:39.596 "trsvcid": "$NVMF_PORT", 00:37:39.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:39.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:39.596 "hdgst": ${hdgst:-false}, 00:37:39.596 "ddgst": ${ddgst:-false} 
00:37:39.596 }, 00:37:39.596 "method": "bdev_nvme_attach_controller" 00:37:39.596 } 00:37:39.596 EOF 00:37:39.596 )") 00:37:39.596 20:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:37:39.596 20:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:37:39.596 20:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:37:39.596 20:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:39.596 "params": { 00:37:39.596 "name": "Nvme0", 00:37:39.596 "trtype": "tcp", 00:37:39.596 "traddr": "10.0.0.2", 00:37:39.596 "adrfam": "ipv4", 00:37:39.596 "trsvcid": "4420", 00:37:39.596 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:39.596 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:39.596 "hdgst": false, 00:37:39.596 "ddgst": false 00:37:39.596 }, 00:37:39.596 "method": "bdev_nvme_attach_controller" 00:37:39.596 }' 00:37:39.596 [2024-11-18 20:38:51.468337] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:37:39.596 [2024-11-18 20:38:51.468427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid420556 ] 00:37:39.596 [2024-11-18 20:38:51.539462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:39.596 [2024-11-18 20:38:51.588539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:40.162 Running I/O for 1 seconds... 
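The `gen_nvmf_target_json` trace above shows the pattern used to feed bdevperf its target configuration: a heredoc template is expanded per subsystem and the resulting JSON is handed to bdevperf through `--json /dev/fd/62`. A hedged sketch of the expanded document for subsystem 0, assuming the target address and port shown in the `printf` output above (10.0.0.2:4420); the function name `gen_config` is illustrative, not part of the SPDK scripts:

```shell
#!/bin/sh
# Sketch of the JSON that gen_nvmf_target_json 0 expands to in the trace
# above. traddr/trsvcid are the values visible in the printf output;
# adjust them for a different target.
gen_config() {
cat <<EOF
{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
# bdevperf consumes this on a file descriptor rather than a file on disk,
# e.g.: bdevperf --json <(gen_config) -q 64 -o 65536 -w verify -t 1
gen_config
```

Passing the config over `/dev/fd` keeps per-run credentials and NQNs out of the workspace, which is why the trace shows `--json /dev/fd/62` instead of a path.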
00:37:41.096 1664.00 IOPS, 104.00 MiB/s 00:37:41.096 Latency(us) 00:37:41.096 [2024-11-18T19:38:53.104Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:41.096 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:41.096 Verification LBA range: start 0x0 length 0x400 00:37:41.096 Nvme0n1 : 1.02 1696.27 106.02 0.00 0.00 37119.01 6189.51 33204.91 00:37:41.096 [2024-11-18T19:38:53.104Z] =================================================================================================================== 00:37:41.096 [2024-11-18T19:38:53.104Z] Total : 1696.27 106.02 0.00 0.00 37119.01 6189.51 33204.91 00:37:41.096 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:37:41.096 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:37:41.096 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:37:41.096 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:41.096 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:37:41.096 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:41.096 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:37:41.096 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:41.096 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:37:41.096 
20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:41.096 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:41.096 rmmod nvme_tcp 00:37:41.356 rmmod nvme_fabrics 00:37:41.356 rmmod nvme_keyring 00:37:41.356 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:41.356 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:37:41.356 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:37:41.356 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 420235 ']' 00:37:41.356 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 420235 00:37:41.356 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 420235 ']' 00:37:41.356 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 420235 00:37:41.356 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:37:41.356 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:41.356 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 420235 00:37:41.356 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:41.356 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:41.356 20:38:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 420235' 00:37:41.356 killing process with pid 420235 00:37:41.356 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 420235 00:37:41.356 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 420235 00:37:41.617 [2024-11-18 20:38:53.390608] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:37:41.617 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:41.617 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:41.617 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:41.617 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:37:41.617 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:37:41.617 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:41.617 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:37:41.617 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:41.617 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:41.617 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:41.617 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:41.617 20:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:43.526 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:43.526 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:37:43.526 00:37:43.526 real 0m8.702s 00:37:43.526 user 0m16.951s 00:37:43.526 sys 0m3.747s 00:37:43.526 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:43.526 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:43.526 ************************************ 00:37:43.526 END TEST nvmf_host_management 00:37:43.526 ************************************ 00:37:43.526 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:43.526 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:43.526 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:43.526 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:43.526 ************************************ 00:37:43.526 START TEST nvmf_lvol 00:37:43.526 ************************************ 00:37:43.527 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:43.785 * Looking for test storage... 
00:37:43.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:43.785 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:43.785 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:43.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:43.786 --rc genhtml_branch_coverage=1 00:37:43.786 --rc genhtml_function_coverage=1 00:37:43.786 --rc genhtml_legend=1 00:37:43.786 --rc geninfo_all_blocks=1 00:37:43.786 --rc geninfo_unexecuted_blocks=1 00:37:43.786 00:37:43.786 ' 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:43.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:43.786 --rc genhtml_branch_coverage=1 00:37:43.786 --rc genhtml_function_coverage=1 00:37:43.786 --rc genhtml_legend=1 00:37:43.786 --rc geninfo_all_blocks=1 00:37:43.786 --rc geninfo_unexecuted_blocks=1 00:37:43.786 00:37:43.786 ' 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:43.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:43.786 --rc genhtml_branch_coverage=1 00:37:43.786 --rc genhtml_function_coverage=1 00:37:43.786 --rc genhtml_legend=1 00:37:43.786 --rc geninfo_all_blocks=1 00:37:43.786 --rc geninfo_unexecuted_blocks=1 00:37:43.786 00:37:43.786 ' 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:43.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:43.786 --rc genhtml_branch_coverage=1 00:37:43.786 --rc genhtml_function_coverage=1 00:37:43.786 --rc genhtml_legend=1 00:37:43.786 --rc geninfo_all_blocks=1 00:37:43.786 --rc geninfo_unexecuted_blocks=1 00:37:43.786 00:37:43.786 ' 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:43.786 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:43.787 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:43.787 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:43.787 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:43.787 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:43.787 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:43.787 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:37:43.787 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:37:43.787 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:43.787 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:37:43.787 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:43.787 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:43.787 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:43.787 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:43.787 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:43.787 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:43.787 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:43.787 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:43.787 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:43.787 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:43.787 
20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:37:43.787 20:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:45.690 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:45.690 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:37:45.690 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:45.690 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:45.690 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:45.690 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:45.690 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:45.690 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:37:45.690 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:45.690 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:37:45.690 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:37:45.690 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:37:45.690 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:37:45.690 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:37:45.690 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:37:45.690 20:38:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:45.690 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:45.690 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:45.690 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:45.690 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:45.690 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:45.690 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:45.690 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:45.690 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:45.951 20:38:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:45.951 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:45.951 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:45.951 20:38:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:45.951 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:45.951 20:38:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:45.951 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:45.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:45.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:37:45.951 00:37:45.951 --- 10.0.0.2 ping statistics --- 00:37:45.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:45.951 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:45.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:45.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:37:45.951 00:37:45.951 --- 10.0.0.1 ping statistics --- 00:37:45.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:45.951 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:45.951 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:45.952 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:45.952 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:45.952 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:37:45.952 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:45.952 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:45.952 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:45.952 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=422635 
00:37:45.952 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:37:45.952 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 422635 00:37:45.952 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 422635 ']' 00:37:45.952 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:45.952 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:45.952 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:45.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:45.952 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:45.952 20:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:45.952 [2024-11-18 20:38:57.907235] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:45.952 [2024-11-18 20:38:57.908383] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:37:45.952 [2024-11-18 20:38:57.908437] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:46.210 [2024-11-18 20:38:57.980784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:46.210 [2024-11-18 20:38:58.028532] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:46.210 [2024-11-18 20:38:58.028586] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:46.210 [2024-11-18 20:38:58.028599] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:46.210 [2024-11-18 20:38:58.028611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:46.210 [2024-11-18 20:38:58.028620] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:46.210 [2024-11-18 20:38:58.030124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:46.210 [2024-11-18 20:38:58.030189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:46.210 [2024-11-18 20:38:58.030196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:46.210 [2024-11-18 20:38:58.118783] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:46.210 [2024-11-18 20:38:58.118974] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:46.210 [2024-11-18 20:38:58.118984] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:37:46.210 [2024-11-18 20:38:58.119238] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:46.210 20:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:46.210 20:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:37:46.210 20:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:46.210 20:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:46.210 20:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:46.210 20:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:46.210 20:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:46.468 [2024-11-18 20:38:58.414859] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:46.468 20:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:47.034 20:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:37:47.034 20:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:47.034 20:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:37:47.034 20:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:37:47.600 20:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:37:47.600 20:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=cf0e5f75-6056-46c4-aa5c-a2434d449dfa 00:37:47.600 20:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cf0e5f75-6056-46c4-aa5c-a2434d449dfa lvol 20 00:37:48.168 20:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3055ed11-502b-405f-8ecb-04e56147d3e5 00:37:48.168 20:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:48.168 20:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3055ed11-502b-405f-8ecb-04e56147d3e5 00:37:48.734 20:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:48.734 [2024-11-18 20:39:00.711053] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:48.734 20:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:49.298 
20:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=423124 00:37:49.298 20:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:37:49.298 20:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:37:50.231 20:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3055ed11-502b-405f-8ecb-04e56147d3e5 MY_SNAPSHOT 00:37:50.489 20:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=30f836a9-0f33-4f8c-b6f9-80fab7fe91d4 00:37:50.489 20:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3055ed11-502b-405f-8ecb-04e56147d3e5 30 00:37:50.747 20:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 30f836a9-0f33-4f8c-b6f9-80fab7fe91d4 MY_CLONE 00:37:51.005 20:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=6ad97087-8471-4418-a3e7-1ecd446140f5 00:37:51.005 20:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 6ad97087-8471-4418-a3e7-1ecd446140f5 00:37:51.571 20:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 423124 00:37:59.682 Initializing NVMe Controllers 00:37:59.682 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:59.682 
Controller IO queue size 128, less than required. 00:37:59.682 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:59.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:37:59.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:37:59.682 Initialization complete. Launching workers. 00:37:59.682 ======================================================== 00:37:59.682 Latency(us) 00:37:59.682 Device Information : IOPS MiB/s Average min max 00:37:59.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10325.01 40.33 12407.47 1204.18 85415.33 00:37:59.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10251.21 40.04 12490.56 2991.64 88513.38 00:37:59.682 ======================================================== 00:37:59.682 Total : 20576.22 80.38 12448.87 1204.18 88513.38 00:37:59.682 00:37:59.682 20:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:59.941 20:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3055ed11-502b-405f-8ecb-04e56147d3e5 00:38:00.198 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cf0e5f75-6056-46c4-aa5c-a2434d449dfa 00:38:00.458 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:38:00.458 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:38:00.458 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:38:00.458 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:00.458 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:38:00.458 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:00.458 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:38:00.458 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:00.458 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:00.458 rmmod nvme_tcp 00:38:00.458 rmmod nvme_fabrics 00:38:00.458 rmmod nvme_keyring 00:38:00.458 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:00.458 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:38:00.458 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:38:00.458 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 422635 ']' 00:38:00.458 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 422635 00:38:00.458 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 422635 ']' 00:38:00.458 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 422635 00:38:00.458 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:38:00.458 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:00.458 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 422635 00:38:00.458 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:00.458 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:00.458 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 422635' 00:38:00.458 killing process with pid 422635 00:38:00.458 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 422635 00:38:00.458 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 422635 00:38:00.718 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:00.718 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:00.718 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:00.718 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:38:00.718 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:38:00.718 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:00.718 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:38:00.718 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:00.718 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:00.718 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:00.718 20:39:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:00.718 20:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:03.259 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:03.259 00:38:03.259 real 0m19.196s 00:38:03.259 user 0m56.944s 00:38:03.259 sys 0m7.677s 00:38:03.259 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:03.259 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:03.259 ************************************ 00:38:03.259 END TEST nvmf_lvol 00:38:03.259 ************************************ 00:38:03.259 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:03.259 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:03.259 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:03.259 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:03.259 ************************************ 00:38:03.259 START TEST nvmf_lvs_grow 00:38:03.259 ************************************ 00:38:03.259 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:03.259 * Looking for test storage... 
00:38:03.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:03.259 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:03.259 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:38:03.259 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:03.259 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:03.259 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:03.259 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:03.259 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:03.259 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:38:03.259 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:38:03.259 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:38:03.259 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:38:03.259 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:38:03.259 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:38:03.259 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:03.260 20:39:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:03.260 20:39:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:03.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.260 --rc genhtml_branch_coverage=1 00:38:03.260 --rc genhtml_function_coverage=1 00:38:03.260 --rc genhtml_legend=1 00:38:03.260 --rc geninfo_all_blocks=1 00:38:03.260 --rc geninfo_unexecuted_blocks=1 00:38:03.260 00:38:03.260 ' 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:03.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.260 --rc genhtml_branch_coverage=1 00:38:03.260 --rc genhtml_function_coverage=1 00:38:03.260 --rc genhtml_legend=1 00:38:03.260 --rc geninfo_all_blocks=1 00:38:03.260 --rc geninfo_unexecuted_blocks=1 00:38:03.260 00:38:03.260 ' 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:03.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.260 --rc genhtml_branch_coverage=1 00:38:03.260 --rc genhtml_function_coverage=1 00:38:03.260 --rc genhtml_legend=1 00:38:03.260 --rc geninfo_all_blocks=1 00:38:03.260 --rc geninfo_unexecuted_blocks=1 00:38:03.260 00:38:03.260 ' 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:03.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.260 --rc genhtml_branch_coverage=1 00:38:03.260 --rc genhtml_function_coverage=1 00:38:03.260 --rc genhtml_legend=1 00:38:03.260 --rc geninfo_all_blocks=1 00:38:03.260 --rc 
geninfo_unexecuted_blocks=1 00:38:03.260 00:38:03.260 ' 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:03.260 20:39:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.260 20:39:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:03.260 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:03.261 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:03.261 20:39:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:03.261 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:03.261 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:38:03.261 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:03.261 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:03.261 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:03.261 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:03.261 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:03.261 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:03.261 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:03.261 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:03.261 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:03.261 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:03.261 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:38:03.261 20:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:05.227 
20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:05.227 20:39:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:05.227 20:39:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:05.227 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:05.227 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:05.227 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:05.227 20:39:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:05.227 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:05.228 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:05.228 
20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:05.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:05.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:38:05.228 00:38:05.228 --- 10.0.0.2 ping statistics --- 00:38:05.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:05.228 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:05.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:05.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:38:05.228 00:38:05.228 --- 10.0.0.1 ping statistics --- 00:38:05.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:05.228 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:05.228 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:05.486 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:38:05.486 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:05.486 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:05.486 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:05.486 20:39:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=426932 00:38:05.486 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:05.486 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 426932 00:38:05.486 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 426932 ']' 00:38:05.486 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:05.486 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:05.486 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:05.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:05.486 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:05.486 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:05.486 [2024-11-18 20:39:17.289476] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:05.486 [2024-11-18 20:39:17.290537] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:38:05.486 [2024-11-18 20:39:17.290607] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:05.486 [2024-11-18 20:39:17.366103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:05.486 [2024-11-18 20:39:17.414101] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:05.486 [2024-11-18 20:39:17.414148] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:05.486 [2024-11-18 20:39:17.414176] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:05.486 [2024-11-18 20:39:17.414187] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:05.486 [2024-11-18 20:39:17.414197] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:05.486 [2024-11-18 20:39:17.414753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:05.746 [2024-11-18 20:39:17.497786] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:05.746 [2024-11-18 20:39:17.498108] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:05.746 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:05.746 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:38:05.746 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:05.746 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:05.746 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:05.746 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:05.746 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:06.006 [2024-11-18 20:39:17.811330] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:06.006 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:38:06.006 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:06.006 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:06.006 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:06.006 ************************************ 00:38:06.006 START TEST lvs_grow_clean 00:38:06.006 ************************************ 00:38:06.006 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:38:06.006 20:39:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:06.006 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:06.006 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:06.006 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:06.006 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:06.006 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:06.006 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:06.006 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:06.006 20:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:06.267 20:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:06.267 20:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:06.526 20:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=4929310c-c0ff-42a9-83da-a94df9559a1b 00:38:06.526 20:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4929310c-c0ff-42a9-83da-a94df9559a1b 00:38:06.526 20:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:06.784 20:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:06.784 20:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:06.784 20:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4929310c-c0ff-42a9-83da-a94df9559a1b lvol 150 00:38:07.042 20:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=651fdeeb-050e-49c5-a39a-8dc79b7bfc92 00:38:07.042 20:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:07.042 20:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:07.300 [2024-11-18 20:39:19.235218] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:07.300 [2024-11-18 20:39:19.235299] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:07.300 true 00:38:07.300 20:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4929310c-c0ff-42a9-83da-a94df9559a1b 00:38:07.300 20:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:07.557 20:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:07.557 20:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:07.816 20:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 651fdeeb-050e-49c5-a39a-8dc79b7bfc92 00:38:08.387 20:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:08.387 [2024-11-18 20:39:20.363567] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:08.387 20:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:08.953 20:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=427372 00:38:08.954 20:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:08.954 20:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:08.954 20:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 427372 /var/tmp/bdevperf.sock 00:38:08.954 20:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 427372 ']' 00:38:08.954 20:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:08.954 20:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:08.954 20:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:08.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:38:08.954 20:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:08.954 20:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:08.954 [2024-11-18 20:39:20.707674] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:38:08.954 [2024-11-18 20:39:20.707772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid427372 ] 00:38:08.954 [2024-11-18 20:39:20.776849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:08.954 [2024-11-18 20:39:20.824385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:08.954 20:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:08.954 20:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:38:08.954 20:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:09.524 Nvme0n1 00:38:09.524 20:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:09.524 [ 00:38:09.524 { 00:38:09.524 "name": "Nvme0n1", 00:38:09.524 "aliases": [ 00:38:09.524 "651fdeeb-050e-49c5-a39a-8dc79b7bfc92" 00:38:09.524 ], 00:38:09.524 "product_name": "NVMe disk", 00:38:09.524 
"block_size": 4096, 00:38:09.524 "num_blocks": 38912, 00:38:09.524 "uuid": "651fdeeb-050e-49c5-a39a-8dc79b7bfc92", 00:38:09.524 "numa_id": 0, 00:38:09.524 "assigned_rate_limits": { 00:38:09.524 "rw_ios_per_sec": 0, 00:38:09.524 "rw_mbytes_per_sec": 0, 00:38:09.524 "r_mbytes_per_sec": 0, 00:38:09.524 "w_mbytes_per_sec": 0 00:38:09.524 }, 00:38:09.524 "claimed": false, 00:38:09.524 "zoned": false, 00:38:09.524 "supported_io_types": { 00:38:09.524 "read": true, 00:38:09.524 "write": true, 00:38:09.524 "unmap": true, 00:38:09.524 "flush": true, 00:38:09.524 "reset": true, 00:38:09.524 "nvme_admin": true, 00:38:09.524 "nvme_io": true, 00:38:09.524 "nvme_io_md": false, 00:38:09.524 "write_zeroes": true, 00:38:09.524 "zcopy": false, 00:38:09.524 "get_zone_info": false, 00:38:09.524 "zone_management": false, 00:38:09.524 "zone_append": false, 00:38:09.524 "compare": true, 00:38:09.524 "compare_and_write": true, 00:38:09.524 "abort": true, 00:38:09.524 "seek_hole": false, 00:38:09.524 "seek_data": false, 00:38:09.524 "copy": true, 00:38:09.524 "nvme_iov_md": false 00:38:09.524 }, 00:38:09.524 "memory_domains": [ 00:38:09.524 { 00:38:09.524 "dma_device_id": "system", 00:38:09.524 "dma_device_type": 1 00:38:09.524 } 00:38:09.524 ], 00:38:09.524 "driver_specific": { 00:38:09.524 "nvme": [ 00:38:09.524 { 00:38:09.524 "trid": { 00:38:09.524 "trtype": "TCP", 00:38:09.524 "adrfam": "IPv4", 00:38:09.524 "traddr": "10.0.0.2", 00:38:09.525 "trsvcid": "4420", 00:38:09.525 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:09.525 }, 00:38:09.525 "ctrlr_data": { 00:38:09.525 "cntlid": 1, 00:38:09.525 "vendor_id": "0x8086", 00:38:09.525 "model_number": "SPDK bdev Controller", 00:38:09.525 "serial_number": "SPDK0", 00:38:09.525 "firmware_revision": "25.01", 00:38:09.525 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:09.525 "oacs": { 00:38:09.525 "security": 0, 00:38:09.525 "format": 0, 00:38:09.525 "firmware": 0, 00:38:09.525 "ns_manage": 0 00:38:09.525 }, 00:38:09.525 "multi_ctrlr": true, 
00:38:09.525 "ana_reporting": false 00:38:09.525 }, 00:38:09.525 "vs": { 00:38:09.525 "nvme_version": "1.3" 00:38:09.525 }, 00:38:09.525 "ns_data": { 00:38:09.525 "id": 1, 00:38:09.525 "can_share": true 00:38:09.525 } 00:38:09.525 } 00:38:09.525 ], 00:38:09.525 "mp_policy": "active_passive" 00:38:09.525 } 00:38:09.525 } 00:38:09.525 ] 00:38:09.784 20:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=427499 00:38:09.784 20:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:09.784 20:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:09.784 Running I/O for 10 seconds... 00:38:10.725 Latency(us) 00:38:10.725 [2024-11-18T19:39:22.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:10.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:10.725 Nvme0n1 : 1.00 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:38:10.725 [2024-11-18T19:39:22.733Z] =================================================================================================================== 00:38:10.725 [2024-11-18T19:39:22.733Z] Total : 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:38:10.725 00:38:11.660 20:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4929310c-c0ff-42a9-83da-a94df9559a1b 00:38:11.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:11.660 Nvme0n1 : 2.00 14939.50 58.36 0.00 0.00 0.00 0.00 0.00 00:38:11.660 [2024-11-18T19:39:23.668Z] 
=================================================================================================================== 00:38:11.660 [2024-11-18T19:39:23.668Z] Total : 14939.50 58.36 0.00 0.00 0.00 0.00 0.00 00:38:11.660 00:38:11.919 true 00:38:11.919 20:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4929310c-c0ff-42a9-83da-a94df9559a1b 00:38:11.919 20:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:12.178 20:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:12.178 20:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:12.178 20:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 427499 00:38:12.748 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:12.748 Nvme0n1 : 3.00 15039.67 58.75 0.00 0.00 0.00 0.00 0.00 00:38:12.748 [2024-11-18T19:39:24.756Z] =================================================================================================================== 00:38:12.748 [2024-11-18T19:39:24.756Z] Total : 15039.67 58.75 0.00 0.00 0.00 0.00 0.00 00:38:12.748 00:38:13.688 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:13.688 Nvme0n1 : 4.00 15121.50 59.07 0.00 0.00 0.00 0.00 0.00 00:38:13.688 [2024-11-18T19:39:25.696Z] =================================================================================================================== 00:38:13.688 [2024-11-18T19:39:25.696Z] Total : 15121.50 59.07 0.00 0.00 0.00 0.00 0.00 00:38:13.688 00:38:15.069 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:38:15.069 Nvme0n1 : 5.00 15196.00 59.36 0.00 0.00 0.00 0.00 0.00 00:38:15.069 [2024-11-18T19:39:27.077Z] =================================================================================================================== 00:38:15.069 [2024-11-18T19:39:27.077Z] Total : 15196.00 59.36 0.00 0.00 0.00 0.00 0.00 00:38:15.069 00:38:16.011 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:16.011 Nvme0n1 : 6.00 15245.67 59.55 0.00 0.00 0.00 0.00 0.00 00:38:16.011 [2024-11-18T19:39:28.019Z] =================================================================================================================== 00:38:16.011 [2024-11-18T19:39:28.019Z] Total : 15245.67 59.55 0.00 0.00 0.00 0.00 0.00 00:38:16.011 00:38:16.951 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:16.951 Nvme0n1 : 7.00 15299.29 59.76 0.00 0.00 0.00 0.00 0.00 00:38:16.951 [2024-11-18T19:39:28.959Z] =================================================================================================================== 00:38:16.951 [2024-11-18T19:39:28.959Z] Total : 15299.29 59.76 0.00 0.00 0.00 0.00 0.00 00:38:16.951 00:38:17.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:17.890 Nvme0n1 : 8.00 15339.50 59.92 0.00 0.00 0.00 0.00 0.00 00:38:17.890 [2024-11-18T19:39:29.898Z] =================================================================================================================== 00:38:17.890 [2024-11-18T19:39:29.898Z] Total : 15339.50 59.92 0.00 0.00 0.00 0.00 0.00 00:38:17.890 00:38:18.824 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:18.824 Nvme0n1 : 9.00 15342.56 59.93 0.00 0.00 0.00 0.00 0.00 00:38:18.824 [2024-11-18T19:39:30.832Z] =================================================================================================================== 00:38:18.824 [2024-11-18T19:39:30.832Z] Total : 15342.56 59.93 0.00 0.00 0.00 0.00 0.00 00:38:18.824 
00:38:19.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:19.765 Nvme0n1 : 10.00 15357.70 59.99 0.00 0.00 0.00 0.00 0.00 00:38:19.765 [2024-11-18T19:39:31.773Z] =================================================================================================================== 00:38:19.765 [2024-11-18T19:39:31.773Z] Total : 15357.70 59.99 0.00 0.00 0.00 0.00 0.00 00:38:19.765 00:38:19.765 00:38:19.765 Latency(us) 00:38:19.765 [2024-11-18T19:39:31.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:19.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:19.765 Nvme0n1 : 10.01 15362.67 60.01 0.00 0.00 8327.38 4296.25 18058.81 00:38:19.765 [2024-11-18T19:39:31.773Z] =================================================================================================================== 00:38:19.765 [2024-11-18T19:39:31.773Z] Total : 15362.67 60.01 0.00 0.00 8327.38 4296.25 18058.81 00:38:19.765 { 00:38:19.765 "results": [ 00:38:19.765 { 00:38:19.765 "job": "Nvme0n1", 00:38:19.765 "core_mask": "0x2", 00:38:19.765 "workload": "randwrite", 00:38:19.765 "status": "finished", 00:38:19.765 "queue_depth": 128, 00:38:19.765 "io_size": 4096, 00:38:19.765 "runtime": 10.005099, 00:38:19.765 "iops": 15362.666576312738, 00:38:19.765 "mibps": 60.010416313721635, 00:38:19.765 "io_failed": 0, 00:38:19.765 "io_timeout": 0, 00:38:19.765 "avg_latency_us": 8327.384390348514, 00:38:19.765 "min_latency_us": 4296.248888888889, 00:38:19.765 "max_latency_us": 18058.80888888889 00:38:19.765 } 00:38:19.765 ], 00:38:19.765 "core_count": 1 00:38:19.765 } 00:38:19.765 20:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 427372 00:38:19.765 20:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 427372 ']' 00:38:19.765 20:39:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 427372 00:38:19.765 20:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:38:19.765 20:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:19.765 20:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 427372 00:38:19.765 20:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:19.765 20:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:19.765 20:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 427372' 00:38:19.765 killing process with pid 427372 00:38:19.765 20:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 427372 00:38:19.765 Received shutdown signal, test time was about 10.000000 seconds 00:38:19.765 00:38:19.765 Latency(us) 00:38:19.765 [2024-11-18T19:39:31.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:19.765 [2024-11-18T19:39:31.773Z] =================================================================================================================== 00:38:19.765 [2024-11-18T19:39:31.773Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:19.765 20:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 427372 00:38:20.024 20:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:20.283 20:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:20.541 20:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4929310c-c0ff-42a9-83da-a94df9559a1b 00:38:20.541 20:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:20.799 20:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:20.799 20:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:38:20.799 20:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:21.059 [2024-11-18 20:39:33.019285] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:21.059 20:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4929310c-c0ff-42a9-83da-a94df9559a1b 00:38:21.059 20:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:38:21.059 20:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4929310c-c0ff-42a9-83da-a94df9559a1b 00:38:21.059 20:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:21.059 20:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:21.059 20:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:21.059 20:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:21.059 20:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:21.059 20:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:21.059 20:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:21.059 20:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:21.059 20:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4929310c-c0ff-42a9-83da-a94df9559a1b 00:38:21.319 request: 00:38:21.319 { 00:38:21.319 "uuid": "4929310c-c0ff-42a9-83da-a94df9559a1b", 00:38:21.319 "method": 
"bdev_lvol_get_lvstores", 00:38:21.319 "req_id": 1 00:38:21.319 } 00:38:21.319 Got JSON-RPC error response 00:38:21.319 response: 00:38:21.319 { 00:38:21.319 "code": -19, 00:38:21.319 "message": "No such device" 00:38:21.319 } 00:38:21.579 20:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:38:21.579 20:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:21.579 20:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:21.579 20:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:21.579 20:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:21.839 aio_bdev 00:38:21.839 20:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 651fdeeb-050e-49c5-a39a-8dc79b7bfc92 00:38:21.839 20:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=651fdeeb-050e-49c5-a39a-8dc79b7bfc92 00:38:21.839 20:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:21.840 20:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:38:21.840 20:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:21.840 20:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:21.840 20:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:22.100 20:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 651fdeeb-050e-49c5-a39a-8dc79b7bfc92 -t 2000 00:38:22.359 [ 00:38:22.359 { 00:38:22.359 "name": "651fdeeb-050e-49c5-a39a-8dc79b7bfc92", 00:38:22.359 "aliases": [ 00:38:22.359 "lvs/lvol" 00:38:22.359 ], 00:38:22.359 "product_name": "Logical Volume", 00:38:22.359 "block_size": 4096, 00:38:22.359 "num_blocks": 38912, 00:38:22.359 "uuid": "651fdeeb-050e-49c5-a39a-8dc79b7bfc92", 00:38:22.359 "assigned_rate_limits": { 00:38:22.359 "rw_ios_per_sec": 0, 00:38:22.359 "rw_mbytes_per_sec": 0, 00:38:22.359 "r_mbytes_per_sec": 0, 00:38:22.359 "w_mbytes_per_sec": 0 00:38:22.359 }, 00:38:22.359 "claimed": false, 00:38:22.359 "zoned": false, 00:38:22.359 "supported_io_types": { 00:38:22.359 "read": true, 00:38:22.359 "write": true, 00:38:22.359 "unmap": true, 00:38:22.359 "flush": false, 00:38:22.359 "reset": true, 00:38:22.359 "nvme_admin": false, 00:38:22.359 "nvme_io": false, 00:38:22.359 "nvme_io_md": false, 00:38:22.359 "write_zeroes": true, 00:38:22.359 "zcopy": false, 00:38:22.359 "get_zone_info": false, 00:38:22.359 "zone_management": false, 00:38:22.359 "zone_append": false, 00:38:22.359 "compare": false, 00:38:22.359 "compare_and_write": false, 00:38:22.359 "abort": false, 00:38:22.359 "seek_hole": true, 00:38:22.359 "seek_data": true, 00:38:22.359 "copy": false, 00:38:22.359 "nvme_iov_md": false 00:38:22.359 }, 00:38:22.359 "driver_specific": { 00:38:22.359 "lvol": { 00:38:22.359 "lvol_store_uuid": "4929310c-c0ff-42a9-83da-a94df9559a1b", 00:38:22.359 "base_bdev": "aio_bdev", 00:38:22.359 
"thin_provision": false, 00:38:22.359 "num_allocated_clusters": 38, 00:38:22.359 "snapshot": false, 00:38:22.359 "clone": false, 00:38:22.359 "esnap_clone": false 00:38:22.359 } 00:38:22.359 } 00:38:22.359 } 00:38:22.359 ] 00:38:22.360 20:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:38:22.360 20:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4929310c-c0ff-42a9-83da-a94df9559a1b 00:38:22.360 20:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:22.620 20:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:22.620 20:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4929310c-c0ff-42a9-83da-a94df9559a1b 00:38:22.620 20:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:22.881 20:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:22.881 20:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 651fdeeb-050e-49c5-a39a-8dc79b7bfc92 00:38:23.141 20:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4929310c-c0ff-42a9-83da-a94df9559a1b 
00:38:23.400 20:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:23.660 20:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:23.660 00:38:23.660 real 0m17.805s 00:38:23.660 user 0m17.322s 00:38:23.660 sys 0m1.879s 00:38:23.660 20:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:23.660 20:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:23.660 ************************************ 00:38:23.660 END TEST lvs_grow_clean 00:38:23.660 ************************************ 00:38:23.921 20:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:38:23.921 20:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:23.921 20:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:23.921 20:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:23.921 ************************************ 00:38:23.921 START TEST lvs_grow_dirty 00:38:23.921 ************************************ 00:38:23.921 20:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:38:23.921 20:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:23.921 20:39:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:23.921 20:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:23.921 20:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:23.921 20:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:23.921 20:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:23.921 20:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:23.921 20:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:23.921 20:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:24.182 20:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:24.182 20:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:24.440 20:39:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=c159d6b4-803a-4e04-bcb6-225ffcec70d4 00:38:24.440 20:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c159d6b4-803a-4e04-bcb6-225ffcec70d4 00:38:24.440 20:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:24.699 20:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:24.699 20:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:24.699 20:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c159d6b4-803a-4e04-bcb6-225ffcec70d4 lvol 150 00:38:24.957 20:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e333e07b-7ca9-4d3b-89d3-98e91f243c8b 00:38:24.957 20:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:24.957 20:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:25.215 [2024-11-18 20:39:37.103220] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:25.215 [2024-11-18 
20:39:37.103319] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:25.215 true 00:38:25.215 20:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c159d6b4-803a-4e04-bcb6-225ffcec70d4 00:38:25.215 20:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:25.473 20:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:25.473 20:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:25.732 20:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e333e07b-7ca9-4d3b-89d3-98e91f243c8b 00:38:25.992 20:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:26.252 [2024-11-18 20:39:38.211513] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:26.252 20:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:26.512 20:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=429515 00:38:26.512 20:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:26.512 20:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:26.512 20:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 429515 /var/tmp/bdevperf.sock 00:38:26.512 20:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 429515 ']' 00:38:26.512 20:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:26.512 20:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:26.512 20:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:26.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:26.512 20:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:26.512 20:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:26.770 [2024-11-18 20:39:38.550162] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:38:26.770 [2024-11-18 20:39:38.550251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid429515 ] 00:38:26.770 [2024-11-18 20:39:38.619091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:26.770 [2024-11-18 20:39:38.666655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:27.027 20:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:27.027 20:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:38:27.027 20:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:27.289 Nvme0n1 00:38:27.289 20:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:27.547 [ 00:38:27.547 { 00:38:27.547 "name": "Nvme0n1", 00:38:27.547 "aliases": [ 00:38:27.547 "e333e07b-7ca9-4d3b-89d3-98e91f243c8b" 00:38:27.547 ], 00:38:27.547 "product_name": "NVMe disk", 00:38:27.547 "block_size": 4096, 00:38:27.547 "num_blocks": 38912, 00:38:27.547 "uuid": "e333e07b-7ca9-4d3b-89d3-98e91f243c8b", 00:38:27.547 "numa_id": 0, 00:38:27.547 "assigned_rate_limits": { 00:38:27.547 "rw_ios_per_sec": 0, 00:38:27.547 "rw_mbytes_per_sec": 0, 00:38:27.547 "r_mbytes_per_sec": 0, 00:38:27.547 "w_mbytes_per_sec": 0 00:38:27.547 }, 00:38:27.547 "claimed": false, 00:38:27.547 "zoned": false, 
00:38:27.547 "supported_io_types": {
00:38:27.547 "read": true,
00:38:27.547 "write": true,
00:38:27.547 "unmap": true,
00:38:27.547 "flush": true,
00:38:27.547 "reset": true,
00:38:27.547 "nvme_admin": true,
00:38:27.547 "nvme_io": true,
00:38:27.547 "nvme_io_md": false,
00:38:27.547 "write_zeroes": true,
00:38:27.547 "zcopy": false,
00:38:27.547 "get_zone_info": false,
00:38:27.547 "zone_management": false,
00:38:27.547 "zone_append": false,
00:38:27.547 "compare": true,
00:38:27.547 "compare_and_write": true,
00:38:27.547 "abort": true,
00:38:27.547 "seek_hole": false,
00:38:27.547 "seek_data": false,
00:38:27.547 "copy": true,
00:38:27.547 "nvme_iov_md": false
00:38:27.547 },
00:38:27.547 "memory_domains": [
00:38:27.547 {
00:38:27.547 "dma_device_id": "system",
00:38:27.547 "dma_device_type": 1
00:38:27.547 }
00:38:27.547 ],
00:38:27.547 "driver_specific": {
00:38:27.547 "nvme": [
00:38:27.547 {
00:38:27.547 "trid": {
00:38:27.547 "trtype": "TCP",
00:38:27.547 "adrfam": "IPv4",
00:38:27.547 "traddr": "10.0.0.2",
00:38:27.547 "trsvcid": "4420",
00:38:27.547 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:38:27.547 },
00:38:27.547 "ctrlr_data": {
00:38:27.547 "cntlid": 1,
00:38:27.547 "vendor_id": "0x8086",
00:38:27.547 "model_number": "SPDK bdev Controller",
00:38:27.547 "serial_number": "SPDK0",
00:38:27.547 "firmware_revision": "25.01",
00:38:27.547 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:38:27.547 "oacs": {
00:38:27.547 "security": 0,
00:38:27.547 "format": 0,
00:38:27.547 "firmware": 0,
00:38:27.547 "ns_manage": 0
00:38:27.547 },
00:38:27.547 "multi_ctrlr": true,
00:38:27.547 "ana_reporting": false
00:38:27.547 },
00:38:27.547 "vs": {
00:38:27.547 "nvme_version": "1.3"
00:38:27.547 },
00:38:27.547 "ns_data": {
00:38:27.547 "id": 1,
00:38:27.547 "can_share": true
00:38:27.547 }
00:38:27.547 }
00:38:27.547 ],
00:38:27.547 "mp_policy": "active_passive"
00:38:27.547 }
00:38:27.547 }
00:38:27.547 ]
00:38:27.547 20:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=429650
00:38:27.547 20:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:38:27.547 20:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:38:27.547 Running I/O for 10 seconds...
00:38:28.928 Latency(us)
00:38:28.928 [2024-11-18T19:39:40.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:28.928 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:38:28.928 Nvme0n1 : 1.00 14859.00 58.04 0.00 0.00 0.00 0.00 0.00
00:38:28.928 [2024-11-18T19:39:40.936Z] ===================================================================================================================
00:38:28.928 [2024-11-18T19:39:40.936Z] Total : 14859.00 58.04 0.00 0.00 0.00 0.00 0.00
00:38:28.928
00:38:29.495 20:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c159d6b4-803a-4e04-bcb6-225ffcec70d4
00:38:29.755 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:38:29.755 Nvme0n1 : 2.00 14986.00 58.54 0.00 0.00 0.00 0.00 0.00
00:38:29.755 [2024-11-18T19:39:41.763Z] ===================================================================================================================
00:38:29.755 [2024-11-18T19:39:41.763Z] Total : 14986.00 58.54 0.00 0.00 0.00 0.00 0.00
00:38:29.755
00:38:29.755 true
00:38:30.014 20:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c159d6b4-803a-4e04-bcb6-225ffcec70d4
00:38:30.014 20:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:38:30.273 20:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:38:30.273 20:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:38:30.273 20:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 429650
00:38:30.533 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:38:30.533 Nvme0n1 : 3.00 15070.67 58.87 0.00 0.00 0.00 0.00 0.00
00:38:30.533 [2024-11-18T19:39:42.541Z] ===================================================================================================================
00:38:30.533 [2024-11-18T19:39:42.541Z] Total : 15070.67 58.87 0.00 0.00 0.00 0.00 0.00
00:38:30.533
00:38:31.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:38:31.914 Nvme0n1 : 4.00 15176.50 59.28 0.00 0.00 0.00 0.00 0.00
00:38:31.914 [2024-11-18T19:39:43.923Z] ===================================================================================================================
00:38:31.915 [2024-11-18T19:39:43.923Z] Total : 15176.50 59.28 0.00 0.00 0.00 0.00 0.00
00:38:31.915
00:38:32.849 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:38:32.849 Nvme0n1 : 5.00 15265.40 59.63 0.00 0.00 0.00 0.00 0.00
00:38:32.849 [2024-11-18T19:39:44.857Z] ===================================================================================================================
00:38:32.849 [2024-11-18T19:39:44.857Z] Total : 15265.40 59.63 0.00 0.00 0.00 0.00 0.00
00:38:32.849
00:38:33.787 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:38:33.787 Nvme0n1 : 6.00 15309.17 59.80 0.00 0.00 0.00 0.00 0.00
00:38:33.787 [2024-11-18T19:39:45.795Z] ===================================================================================================================
00:38:33.787 [2024-11-18T19:39:45.795Z] Total : 15309.17 59.80 0.00 0.00 0.00 0.00 0.00
00:38:33.787
00:38:34.800 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:38:34.800 Nvme0n1 : 7.00 15317.43 59.83 0.00 0.00 0.00 0.00 0.00
00:38:34.800 [2024-11-18T19:39:46.808Z] ===================================================================================================================
00:38:34.800 [2024-11-18T19:39:46.808Z] Total : 15317.43 59.83 0.00 0.00 0.00 0.00 0.00
00:38:34.800
00:38:35.735 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:38:35.735 Nvme0n1 : 8.00 15355.38 59.98 0.00 0.00 0.00 0.00 0.00
00:38:35.735 [2024-11-18T19:39:47.743Z] ===================================================================================================================
00:38:35.735 [2024-11-18T19:39:47.743Z] Total : 15355.38 59.98 0.00 0.00 0.00 0.00 0.00
00:38:35.735
00:38:36.670 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:38:36.670 Nvme0n1 : 9.00 15399.00 60.15 0.00 0.00 0.00 0.00 0.00
00:38:36.670 [2024-11-18T19:39:48.678Z] ===================================================================================================================
00:38:36.670 [2024-11-18T19:39:48.678Z] Total : 15399.00 60.15 0.00 0.00 0.00 0.00 0.00
00:38:36.670
00:38:37.604 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:38:37.604 Nvme0n1 : 10.00 15421.20 60.24 0.00 0.00 0.00 0.00 0.00
00:38:37.604 [2024-11-18T19:39:49.612Z] ===================================================================================================================
00:38:37.604 [2024-11-18T19:39:49.612Z] Total : 15421.20 60.24 0.00 0.00 0.00 0.00 0.00
00:38:37.604
00:38:37.604
00:38:37.604 Latency(us)
00:38:37.604 [2024-11-18T19:39:49.612Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:37.604 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:38:37.604 Nvme0n1 : 10.01 15426.03 60.26 0.00 0.00 8293.16 4223.43 18058.81
00:38:37.604 [2024-11-18T19:39:49.612Z] ===================================================================================================================
00:38:37.604 [2024-11-18T19:39:49.612Z] Total : 15426.03 60.26 0.00 0.00 8293.16 4223.43 18058.81
00:38:37.604 {
00:38:37.604 "results": [
00:38:37.604 {
00:38:37.604 "job": "Nvme0n1",
00:38:37.604 "core_mask": "0x2",
00:38:37.604 "workload": "randwrite",
00:38:37.604 "status": "finished",
00:38:37.604 "queue_depth": 128,
00:38:37.604 "io_size": 4096,
00:38:37.604 "runtime": 10.005167,
00:38:37.604 "iops": 15426.0293706242,
00:38:37.604 "mibps": 60.25792722900078,
00:38:37.604 "io_failed": 0,
00:38:37.604 "io_timeout": 0,
00:38:37.604 "avg_latency_us": 8293.159890535087,
00:38:37.604 "min_latency_us": 4223.431111111111,
00:38:37.604 "max_latency_us": 18058.80888888889
00:38:37.604 }
00:38:37.604 ],
00:38:37.604 "core_count": 1
00:38:37.604 }
00:38:37.604 20:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 429515
00:38:37.604 20:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 429515 ']'
00:38:37.605 20:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 429515
00:38:37.605 20:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname
00:38:37.605 20:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:38:37.605 20:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 429515
00:38:37.605 20:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:38:37.605 20:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:38:37.605 20:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 429515'
killing process with pid 429515
00:38:37.605 20:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 429515
00:38:37.605 Received shutdown signal, test time was about 10.000000 seconds
00:38:37.605
00:38:37.605 Latency(us)
00:38:37.605 [2024-11-18T19:39:49.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:37.605 [2024-11-18T19:39:49.613Z] ===================================================================================================================
00:38:37.605 [2024-11-18T19:39:49.613Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:38:37.605 20:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 429515
00:38:37.863 20:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:38:38.121 20:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:38:38.690 20:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c159d6b4-803a-4e04-bcb6-225ffcec70d4
00:38:38.690 20:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:38:38.690 20:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:38:38.690 20:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]]
00:38:38.690 20:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 426932
00:38:38.690 20:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 426932
00:38:38.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 426932 Killed "${NVMF_APP[@]}" "$@"
00:38:38.949 20:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true
00:38:38.949 20:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1
00:38:38.949 20:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:38:38.949 20:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable
00:38:38.949 20:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:38:38.949 20:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=430970
00:38:38.949 20:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1
00:38:38.949 20:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 430970
00:38:38.949 20:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 430970 ']'
00:38:38.949 20:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:38:38.949 20:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100
00:38:38.949 20:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:38:38.949 20:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable
00:38:38.949 20:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:38:38.949 [2024-11-18 20:39:50.751899] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:38:38.949 [2024-11-18 20:39:50.753008] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization...
00:38:38.949 [2024-11-18 20:39:50.753076] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:38:38.949 [2024-11-18 20:39:50.824138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:38.949 [2024-11-18 20:39:50.867887] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:38:38.949 [2024-11-18 20:39:50.867939] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:38:38.949 [2024-11-18 20:39:50.867967] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:38:38.949 [2024-11-18 20:39:50.867978] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:38:38.949 [2024-11-18 20:39:50.867988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:38:38.949 [2024-11-18 20:39:50.868501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:38:38.949 [2024-11-18 20:39:50.950838] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:38:38.949 [2024-11-18 20:39:50.951174] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:38:39.208 20:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:38:39.208 20:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0
00:38:39.208 20:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:38:39.208 20:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:39.208 20:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:38:39.208 20:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:38:39.208 20:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:38:39.466 [2024-11-18 20:39:51.263170] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore
00:38:39.466 [2024-11-18 20:39:51.263304] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:38:39.466 [2024-11-18 20:39:51.263351] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:38:39.466 20:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev
00:38:39.466 20:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e333e07b-7ca9-4d3b-89d3-98e91f243c8b
00:38:39.466 20:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e333e07b-7ca9-4d3b-89d3-98e91f243c8b
00:38:39.466 20:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:38:39.466 20:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i
00:38:39.466 20:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:38:39.466 20:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:38:39.466 20:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:38:39.726 20:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e333e07b-7ca9-4d3b-89d3-98e91f243c8b -t 2000
00:38:39.987 [
00:38:39.987 {
00:38:39.987 "name": "e333e07b-7ca9-4d3b-89d3-98e91f243c8b",
00:38:39.987 "aliases": [
00:38:39.987 "lvs/lvol"
00:38:39.987 ],
00:38:39.987 "product_name": "Logical Volume",
00:38:39.987 "block_size": 4096,
00:38:39.987 "num_blocks": 38912,
00:38:39.987 "uuid": "e333e07b-7ca9-4d3b-89d3-98e91f243c8b",
00:38:39.987 "assigned_rate_limits": {
00:38:39.987 "rw_ios_per_sec": 0,
00:38:39.987 "rw_mbytes_per_sec": 0,
00:38:39.987 "r_mbytes_per_sec": 0,
00:38:39.987 "w_mbytes_per_sec": 0
00:38:39.987 },
00:38:39.987 "claimed": false,
00:38:39.987 "zoned": false,
00:38:39.987 "supported_io_types": {
00:38:39.987 "read": true,
00:38:39.987 "write": true,
00:38:39.987 "unmap": true,
00:38:39.987 "flush": false,
00:38:39.987 "reset": true,
00:38:39.987 "nvme_admin": false,
00:38:39.987 "nvme_io": false,
00:38:39.987 "nvme_io_md": false,
00:38:39.987 "write_zeroes": true,
00:38:39.987 "zcopy": false,
00:38:39.987 "get_zone_info": false,
00:38:39.987 "zone_management": false,
00:38:39.987 "zone_append": false,
00:38:39.987 "compare": false,
00:38:39.987 "compare_and_write": false,
00:38:39.987 "abort": false,
00:38:39.987 "seek_hole": true,
00:38:39.987 "seek_data": true,
00:38:39.987 "copy": false,
00:38:39.987 "nvme_iov_md": false
00:38:39.987 },
00:38:39.987 "driver_specific": {
00:38:39.987 "lvol": {
00:38:39.987 "lvol_store_uuid": "c159d6b4-803a-4e04-bcb6-225ffcec70d4",
00:38:39.987 "base_bdev": "aio_bdev",
00:38:39.987 "thin_provision": false,
00:38:39.987 "num_allocated_clusters": 38,
00:38:39.987 "snapshot": false,
00:38:39.987 "clone": false,
00:38:39.987 "esnap_clone": false
00:38:39.987 }
00:38:39.987 }
00:38:39.987 }
00:38:39.987 ]
00:38:39.987 20:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0
00:38:39.987 20:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c159d6b4-803a-4e04-bcb6-225ffcec70d4
00:38:39.987 20:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters'
00:38:40.248 20:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 ))
00:38:40.248 20:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c159d6b4-803a-4e04-bcb6-225ffcec70d4
00:38:40.248 20:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters'
00:38:40.507 20:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 ))
00:38:40.507 20:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:38:40.766 [2024-11-18 20:39:52.653039] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:38:40.766 20:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c159d6b4-803a-4e04-bcb6-225ffcec70d4
00:38:40.766 20:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0
00:38:40.766 20:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c159d6b4-803a-4e04-bcb6-225ffcec70d4
00:38:40.766 20:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:38:40.766 20:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:38:40.766 20:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:38:40.766 20:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:38:40.766 20:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:38:40.766 20:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:38:40.766 20:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:38:40.766 20:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:38:40.766 20:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c159d6b4-803a-4e04-bcb6-225ffcec70d4
00:38:41.024 request:
00:38:41.024 {
00:38:41.024 "uuid": "c159d6b4-803a-4e04-bcb6-225ffcec70d4",
00:38:41.024 "method": "bdev_lvol_get_lvstores",
00:38:41.024 "req_id": 1
00:38:41.024 }
00:38:41.024 Got JSON-RPC error response
00:38:41.024 response:
00:38:41.024 {
00:38:41.024 "code": -19,
00:38:41.024 "message": "No such device"
00:38:41.024 }
00:38:41.024 20:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1
00:38:41.024 20:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:38:41.024 20:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:38:41.024 20:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:38:41.024 20:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:38:41.282 aio_bdev
00:38:41.282 20:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e333e07b-7ca9-4d3b-89d3-98e91f243c8b
00:38:41.282 20:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e333e07b-7ca9-4d3b-89d3-98e91f243c8b
00:38:41.282 20:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:38:41.282 20:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i
00:38:41.282 20:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:38:41.282 20:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:38:41.282 20:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:38:41.540 20:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e333e07b-7ca9-4d3b-89d3-98e91f243c8b -t 2000
00:38:41.798 [
00:38:41.798 {
00:38:41.798 "name": "e333e07b-7ca9-4d3b-89d3-98e91f243c8b",
00:38:41.798 "aliases": [
00:38:41.798 "lvs/lvol"
00:38:41.798 ],
00:38:41.798 "product_name": "Logical Volume",
00:38:41.798 "block_size": 4096,
00:38:41.798 "num_blocks": 38912,
00:38:41.798 "uuid": "e333e07b-7ca9-4d3b-89d3-98e91f243c8b",
00:38:41.798 "assigned_rate_limits": {
00:38:41.798 "rw_ios_per_sec": 0,
00:38:41.798 "rw_mbytes_per_sec": 0,
00:38:41.798 "r_mbytes_per_sec": 0,
00:38:41.798 "w_mbytes_per_sec": 0
00:38:41.798 },
00:38:41.798 "claimed": false,
00:38:41.798 "zoned": false,
00:38:41.798 "supported_io_types": {
00:38:41.798 "read": true,
00:38:41.798 "write": true,
00:38:41.798 "unmap": true,
00:38:41.798 "flush": false,
00:38:41.798 "reset": true,
00:38:41.798 "nvme_admin": false,
00:38:41.798 "nvme_io": false,
00:38:41.798 "nvme_io_md": false,
00:38:41.798 "write_zeroes": true,
00:38:41.798 "zcopy": false,
00:38:41.798 "get_zone_info": false,
00:38:41.798 "zone_management": false,
00:38:41.798 "zone_append": false,
00:38:41.798 "compare": false,
00:38:41.798 "compare_and_write": false,
00:38:41.798 "abort": false,
00:38:41.798 "seek_hole": true,
00:38:41.798 "seek_data": true,
00:38:41.798 "copy": false,
00:38:41.798 "nvme_iov_md": false
00:38:41.798 },
00:38:41.798 "driver_specific": {
00:38:41.798 "lvol": {
00:38:41.798 "lvol_store_uuid": "c159d6b4-803a-4e04-bcb6-225ffcec70d4",
00:38:41.798 "base_bdev": "aio_bdev",
00:38:41.798 "thin_provision": false,
00:38:41.798 "num_allocated_clusters": 38,
00:38:41.798 "snapshot": false,
00:38:41.798 "clone": false,
00:38:41.798 "esnap_clone": false
00:38:41.798 }
00:38:41.798 }
00:38:41.798 }
00:38:41.798 ]
00:38:41.798 20:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0
00:38:41.798 20:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c159d6b4-803a-4e04-bcb6-225ffcec70d4
00:38:41.798 20:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:38:42.058 20:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:38:42.058 20:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c159d6b4-803a-4e04-bcb6-225ffcec70d4
00:38:42.058 20:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:38:42.319 20:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:38:42.319 20:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e333e07b-7ca9-4d3b-89d3-98e91f243c8b
00:38:42.580 20:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c159d6b4-803a-4e04-bcb6-225ffcec70d4
00:38:43.151 20:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:38:43.151 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:38:43.412
00:38:43.412 real 0m19.451s
00:38:43.412 user 0m36.556s
00:38:43.412 sys 0m4.637s
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:38:43.412 ************************************
00:38:43.412 END TEST lvs_grow_dirty
00:38:43.412 ************************************
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
nvmf_trace.0
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20}
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 430970 ']'
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 430970
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 430970 ']'
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 430970
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 430970
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:38:43.412 20:39:55
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 430970' 00:38:43.412 killing process with pid 430970 00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 430970 00:38:43.412 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 430970 00:38:43.673 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:43.673 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:43.673 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:43.673 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:38:43.673 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:38:43.673 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:38:43.673 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:43.673 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:43.673 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:43.673 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:43.673 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:43.673 20:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:45.581 20:39:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:45.581 00:38:45.581 real 0m42.785s 00:38:45.581 user 0m55.713s 00:38:45.581 sys 0m8.490s 00:38:45.581 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:45.581 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:45.582 ************************************ 00:38:45.582 END TEST nvmf_lvs_grow 00:38:45.582 ************************************ 00:38:45.582 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:45.582 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:45.582 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:45.582 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:45.840 ************************************ 00:38:45.840 START TEST nvmf_bdev_io_wait 00:38:45.840 ************************************ 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:45.840 * Looking for test storage... 
00:38:45.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:45.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.840 --rc genhtml_branch_coverage=1 00:38:45.840 --rc genhtml_function_coverage=1 00:38:45.840 --rc genhtml_legend=1 00:38:45.840 --rc geninfo_all_blocks=1 00:38:45.840 --rc geninfo_unexecuted_blocks=1 00:38:45.840 00:38:45.840 ' 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:45.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.840 --rc genhtml_branch_coverage=1 00:38:45.840 --rc genhtml_function_coverage=1 00:38:45.840 --rc genhtml_legend=1 00:38:45.840 --rc geninfo_all_blocks=1 00:38:45.840 --rc geninfo_unexecuted_blocks=1 00:38:45.840 00:38:45.840 ' 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:45.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.840 --rc genhtml_branch_coverage=1 00:38:45.840 --rc genhtml_function_coverage=1 00:38:45.840 --rc genhtml_legend=1 00:38:45.840 --rc geninfo_all_blocks=1 00:38:45.840 --rc geninfo_unexecuted_blocks=1 00:38:45.840 00:38:45.840 ' 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:45.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.840 --rc genhtml_branch_coverage=1 00:38:45.840 --rc genhtml_function_coverage=1 
00:38:45.840 --rc genhtml_legend=1 00:38:45.840 --rc geninfo_all_blocks=1 00:38:45.840 --rc geninfo_unexecuted_blocks=1 00:38:45.840 00:38:45.840 ' 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:45.840 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:45.841 20:39:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.841 20:39:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:45.841 20:39:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:45.841 20:39:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:38:45.841 20:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:38:47.745 20:39:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:47.745 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:47.745 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:47.745 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:47.745 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:48.006 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:38:48.006 20:39:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:48.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:48.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:38:48.006 00:38:48.006 --- 10.0.0.2 ping statistics --- 00:38:48.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:48.006 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:48.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:48.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:38:48.006 00:38:48.006 --- 10.0.0.1 ping statistics --- 00:38:48.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:48.006 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:48.006 20:39:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=433485 00:38:48.006 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:38:48.007 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 433485 00:38:48.007 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 433485 ']' 00:38:48.007 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:48.007 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:48.007 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:48.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:48.007 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:48.007 20:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:48.007 [2024-11-18 20:39:59.955444] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:48.007 [2024-11-18 20:39:59.956529] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:38:48.007 [2024-11-18 20:39:59.956591] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:48.265 [2024-11-18 20:40:00.031121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:48.265 [2024-11-18 20:40:00.081611] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:48.265 [2024-11-18 20:40:00.081690] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:48.265 [2024-11-18 20:40:00.081705] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:48.265 [2024-11-18 20:40:00.081717] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:48.265 [2024-11-18 20:40:00.081727] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:48.265 [2024-11-18 20:40:00.083198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:48.265 [2024-11-18 20:40:00.083257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:48.265 [2024-11-18 20:40:00.083344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:48.265 [2024-11-18 20:40:00.083350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:48.265 [2024-11-18 20:40:00.083876] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:48.265 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:48.265 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:38:48.265 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:48.265 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:48.265 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:48.265 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:48.265 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:38:48.265 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:48.265 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:48.265 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:48.265 20:40:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:38:48.265 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:48.265 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:48.525 [2024-11-18 20:40:00.289776] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:48.525 [2024-11-18 20:40:00.289977] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:48.525 [2024-11-18 20:40:00.290825] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:48.525 [2024-11-18 20:40:00.291535] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:48.525 [2024-11-18 20:40:00.296087] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:48.525 Malloc0 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:48.525 20:40:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:48.525 [2024-11-18 20:40:00.352266] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=433513 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=433515 00:38:48.525 20:40:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:48.525 { 00:38:48.525 "params": { 00:38:48.525 "name": "Nvme$subsystem", 00:38:48.525 "trtype": "$TEST_TRANSPORT", 00:38:48.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:48.525 "adrfam": "ipv4", 00:38:48.525 "trsvcid": "$NVMF_PORT", 00:38:48.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:48.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:48.525 "hdgst": ${hdgst:-false}, 00:38:48.525 "ddgst": ${ddgst:-false} 00:38:48.525 }, 00:38:48.525 "method": "bdev_nvme_attach_controller" 00:38:48.525 } 00:38:48.525 EOF 00:38:48.525 )") 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=433517 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:48.525 20:40:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:48.525 { 00:38:48.525 "params": { 00:38:48.525 "name": "Nvme$subsystem", 00:38:48.525 "trtype": "$TEST_TRANSPORT", 00:38:48.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:48.525 "adrfam": "ipv4", 00:38:48.525 "trsvcid": "$NVMF_PORT", 00:38:48.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:48.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:48.525 "hdgst": ${hdgst:-false}, 00:38:48.525 "ddgst": ${ddgst:-false} 00:38:48.525 }, 00:38:48.525 "method": "bdev_nvme_attach_controller" 00:38:48.525 } 00:38:48.525 EOF 00:38:48.525 )") 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=433520 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:48.525 { 00:38:48.525 "params": { 00:38:48.525 "name": 
"Nvme$subsystem", 00:38:48.525 "trtype": "$TEST_TRANSPORT", 00:38:48.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:48.525 "adrfam": "ipv4", 00:38:48.525 "trsvcid": "$NVMF_PORT", 00:38:48.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:48.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:48.525 "hdgst": ${hdgst:-false}, 00:38:48.525 "ddgst": ${ddgst:-false} 00:38:48.525 }, 00:38:48.525 "method": "bdev_nvme_attach_controller" 00:38:48.525 } 00:38:48.525 EOF 00:38:48.525 )") 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:48.525 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:48.526 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:48.526 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:48.526 { 00:38:48.526 "params": { 00:38:48.526 "name": "Nvme$subsystem", 00:38:48.526 "trtype": "$TEST_TRANSPORT", 00:38:48.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:48.526 "adrfam": "ipv4", 00:38:48.526 "trsvcid": "$NVMF_PORT", 00:38:48.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:48.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:48.526 "hdgst": ${hdgst:-false}, 00:38:48.526 "ddgst": ${ddgst:-false} 00:38:48.526 }, 00:38:48.526 "method": 
"bdev_nvme_attach_controller" 00:38:48.526 } 00:38:48.526 EOF 00:38:48.526 )") 00:38:48.526 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:48.526 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 433513 00:38:48.526 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:48.526 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:48.526 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:48.526 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:48.526 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:48.526 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:48.526 "params": { 00:38:48.526 "name": "Nvme1", 00:38:48.526 "trtype": "tcp", 00:38:48.526 "traddr": "10.0.0.2", 00:38:48.526 "adrfam": "ipv4", 00:38:48.526 "trsvcid": "4420", 00:38:48.526 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:48.526 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:48.526 "hdgst": false, 00:38:48.526 "ddgst": false 00:38:48.526 }, 00:38:48.526 "method": "bdev_nvme_attach_controller" 00:38:48.526 }' 00:38:48.526 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:48.526 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:48.526 "params": { 00:38:48.526 "name": "Nvme1", 00:38:48.526 "trtype": "tcp", 00:38:48.526 "traddr": "10.0.0.2", 00:38:48.526 "adrfam": "ipv4", 00:38:48.526 "trsvcid": "4420", 00:38:48.526 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:48.526 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:48.526 "hdgst": false, 
00:38:48.526 "ddgst": false 00:38:48.526 }, 00:38:48.526 "method": "bdev_nvme_attach_controller" 00:38:48.526 }' 00:38:48.526 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:48.526 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:48.526 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:48.526 "params": { 00:38:48.526 "name": "Nvme1", 00:38:48.526 "trtype": "tcp", 00:38:48.526 "traddr": "10.0.0.2", 00:38:48.526 "adrfam": "ipv4", 00:38:48.526 "trsvcid": "4420", 00:38:48.526 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:48.526 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:48.526 "hdgst": false, 00:38:48.526 "ddgst": false 00:38:48.526 }, 00:38:48.526 "method": "bdev_nvme_attach_controller" 00:38:48.526 }' 00:38:48.526 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:48.526 20:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:48.526 "params": { 00:38:48.526 "name": "Nvme1", 00:38:48.526 "trtype": "tcp", 00:38:48.526 "traddr": "10.0.0.2", 00:38:48.526 "adrfam": "ipv4", 00:38:48.526 "trsvcid": "4420", 00:38:48.526 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:48.526 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:48.526 "hdgst": false, 00:38:48.526 "ddgst": false 00:38:48.526 }, 00:38:48.526 "method": "bdev_nvme_attach_controller" 00:38:48.526 }' 00:38:48.526 [2024-11-18 20:40:00.401066] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:38:48.526 [2024-11-18 20:40:00.401066] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:38:48.526 [2024-11-18 20:40:00.401159] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:38:48.526 [2024-11-18 20:40:00.401159] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:38:48.526 [2024-11-18 20:40:00.402859] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:38:48.526 [2024-11-18 20:40:00.402861] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:38:48.526 [2024-11-18 20:40:00.402948] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:38:48.526 [2024-11-18 20:40:00.402949] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:38:48.785 [2024-11-18 20:40:00.590111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:48.785 [2024-11-18 20:40:00.633737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:48.785 [2024-11-18 20:40:00.696838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:48.785 [2024-11-18 20:40:00.738901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:48.786 [2024-11-18 20:40:00.765731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:49.045 [2024-11-18 20:40:00.803792] reactor.c:1005:reactor_run: *NOTICE*:
Reactor started on core 7 00:38:49.045 [2024-11-18 20:40:00.837092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:49.045 [2024-11-18 20:40:00.875565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:49.045 Running I/O for 1 seconds... 00:38:49.045 Running I/O for 1 seconds... 00:38:49.304 Running I/O for 1 seconds... 00:38:49.304 Running I/O for 1 seconds... 00:38:50.241 8977.00 IOPS, 35.07 MiB/s 00:38:50.241 Latency(us) 00:38:50.241 [2024-11-18T19:40:02.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:50.241 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:38:50.241 Nvme1n1 : 1.01 9020.19 35.24 0.00 0.00 14121.78 4247.70 16408.27 00:38:50.241 [2024-11-18T19:40:02.249Z] =================================================================================================================== 00:38:50.241 [2024-11-18T19:40:02.249Z] Total : 9020.19 35.24 0.00 0.00 14121.78 4247.70 16408.27 00:38:50.241 180160.00 IOPS, 703.75 MiB/s 00:38:50.241 Latency(us) 00:38:50.241 [2024-11-18T19:40:02.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:50.241 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:38:50.241 Nvme1n1 : 1.00 179800.67 702.35 0.00 0.00 708.01 326.16 2002.49 00:38:50.241 [2024-11-18T19:40:02.249Z] =================================================================================================================== 00:38:50.241 [2024-11-18T19:40:02.249Z] Total : 179800.67 702.35 0.00 0.00 708.01 326.16 2002.49 00:38:50.241 8574.00 IOPS, 33.49 MiB/s 00:38:50.241 Latency(us) 00:38:50.241 [2024-11-18T19:40:02.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:50.241 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:38:50.241 Nvme1n1 : 1.01 8641.39 33.76 0.00 0.00 14748.91 2063.17 19709.35 00:38:50.241 [2024-11-18T19:40:02.249Z] 
=================================================================================================================== 00:38:50.241 [2024-11-18T19:40:02.249Z] Total : 8641.39 33.76 0.00 0.00 14748.91 2063.17 19709.35 00:38:50.241 9254.00 IOPS, 36.15 MiB/s 00:38:50.241 Latency(us) 00:38:50.241 [2024-11-18T19:40:02.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:50.241 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:38:50.241 Nvme1n1 : 1.01 9337.65 36.48 0.00 0.00 13663.73 4296.25 19612.25 00:38:50.241 [2024-11-18T19:40:02.249Z] =================================================================================================================== 00:38:50.241 [2024-11-18T19:40:02.249Z] Total : 9337.65 36.48 0.00 0.00 13663.73 4296.25 19612.25 00:38:50.241 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 433515 00:38:50.241 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 433517 00:38:50.241 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 433520 00:38:50.241 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:50.241 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.241 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:50.500 rmmod nvme_tcp 00:38:50.500 rmmod nvme_fabrics 00:38:50.500 rmmod nvme_keyring 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 433485 ']' 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 433485 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 433485 ']' 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 433485 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 433485 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 433485' 00:38:50.500 killing process with pid 433485 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 433485 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 433485 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:50.500 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:38:50.758 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:50.758 20:40:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:50.758 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:50.758 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:50.758 20:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:52.666 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:52.666 00:38:52.666 real 0m6.943s 00:38:52.666 user 0m13.300s 00:38:52.666 sys 0m4.020s 00:38:52.666 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:52.666 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:52.666 ************************************ 00:38:52.666 END TEST nvmf_bdev_io_wait 00:38:52.666 ************************************ 00:38:52.666 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:52.666 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:52.666 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:52.666 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:52.666 ************************************ 00:38:52.666 START TEST nvmf_queue_depth 00:38:52.666 ************************************ 00:38:52.666 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:52.666 * Looking for test storage... 00:38:52.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:52.666 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:52.666 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:38:52.666 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:52.925 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:52.925 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:52.925 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:52.925 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:52.925 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:38:52.925 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:38:52.925 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:38:52.925 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:38:52.925 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:38:52.925 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:38:52.925 20:40:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:38:52.925 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:52.925 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:38:52.925 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:38:52.925 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:52.925 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:52.925 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:38:52.925 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:38:52.925 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:52.925 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:38:52.925 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:38:52.925 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:38:52.925 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:38:52.925 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:38:52.926 20:40:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:52.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.926 --rc genhtml_branch_coverage=1 00:38:52.926 --rc genhtml_function_coverage=1 00:38:52.926 --rc genhtml_legend=1 00:38:52.926 --rc geninfo_all_blocks=1 00:38:52.926 --rc geninfo_unexecuted_blocks=1 00:38:52.926 00:38:52.926 ' 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:52.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.926 --rc genhtml_branch_coverage=1 00:38:52.926 --rc genhtml_function_coverage=1 00:38:52.926 --rc genhtml_legend=1 00:38:52.926 --rc geninfo_all_blocks=1 00:38:52.926 --rc geninfo_unexecuted_blocks=1 00:38:52.926 00:38:52.926 ' 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:52.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.926 --rc genhtml_branch_coverage=1 00:38:52.926 --rc genhtml_function_coverage=1 00:38:52.926 --rc genhtml_legend=1 00:38:52.926 --rc geninfo_all_blocks=1 00:38:52.926 --rc geninfo_unexecuted_blocks=1 00:38:52.926 00:38:52.926 ' 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:52.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.926 --rc genhtml_branch_coverage=1 00:38:52.926 --rc genhtml_function_coverage=1 00:38:52.926 --rc genhtml_legend=1 00:38:52.926 --rc geninfo_all_blocks=1 00:38:52.926 --rc geninfo_unexecuted_blocks=1 00:38:52.926 00:38:52.926 ' 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:52.926 20:40:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.926 20:40:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:52.926 20:40:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:52.926 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:38:52.927 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:52.927 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:52.927 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:52.927 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:52.927 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:52.927 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:52.927 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:52.927 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:52.927 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:52.927 20:40:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:52.927 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:38:52.927 20:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:55.461 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:55.461 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:38:55.461 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:55.461 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:55.461 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:38:55.462 
20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:55.462 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:55.462 20:40:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:55.462 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:55.462 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:55.462 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:55.462 20:40:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:55.462 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:55.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:55.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:38:55.462 00:38:55.462 --- 10.0.0.2 ping statistics --- 00:38:55.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:55.463 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:38:55.463 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:55.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:55.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:38:55.463 00:38:55.463 --- 10.0.0.1 ping statistics --- 00:38:55.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:55.463 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:38:55.463 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:55.463 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:38:55.463 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:55.463 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:55.463 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:55.463 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:55.463 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:55.463 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:55.463 20:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:55.463 20:40:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=435745 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 435745 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 435745 ']' 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:55.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:55.463 [2024-11-18 20:40:07.051142] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:55.463 [2024-11-18 20:40:07.052197] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:38:55.463 [2024-11-18 20:40:07.052247] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:55.463 [2024-11-18 20:40:07.125302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:55.463 [2024-11-18 20:40:07.171050] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:55.463 [2024-11-18 20:40:07.171094] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:55.463 [2024-11-18 20:40:07.171123] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:55.463 [2024-11-18 20:40:07.171133] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:55.463 [2024-11-18 20:40:07.171143] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:55.463 [2024-11-18 20:40:07.171575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:55.463 [2024-11-18 20:40:07.252515] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:55.463 [2024-11-18 20:40:07.252852] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:55.463 [2024-11-18 20:40:07.304139] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:55.463 Malloc0 00:38:55.463 20:40:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:55.463 [2024-11-18 20:40:07.364263] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.463 
20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=435770 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 435770 /var/tmp/bdevperf.sock 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 435770 ']' 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:55.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:55.463 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:55.463 [2024-11-18 20:40:07.409658] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:38:55.463 [2024-11-18 20:40:07.409730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid435770 ] 00:38:55.724 [2024-11-18 20:40:07.479181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:55.724 [2024-11-18 20:40:07.526389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:55.724 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:55.724 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:38:55.724 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:55.724 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.724 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:55.984 NVMe0n1 00:38:55.984 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.984 20:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:55.984 Running I/O for 10 seconds... 
00:38:58.298 8199.00 IOPS, 32.03 MiB/s [2024-11-18T19:40:11.240Z] 8678.00 IOPS, 33.90 MiB/s [2024-11-18T19:40:12.176Z] 8541.33 IOPS, 33.36 MiB/s [2024-11-18T19:40:13.118Z] 8673.00 IOPS, 33.88 MiB/s [2024-11-18T19:40:14.058Z] 8606.40 IOPS, 33.62 MiB/s [2024-11-18T19:40:14.997Z] 8594.83 IOPS, 33.57 MiB/s [2024-11-18T19:40:15.935Z] 8635.29 IOPS, 33.73 MiB/s [2024-11-18T19:40:17.313Z] 8681.88 IOPS, 33.91 MiB/s [2024-11-18T19:40:18.250Z] 8683.11 IOPS, 33.92 MiB/s [2024-11-18T19:40:18.250Z] 8708.90 IOPS, 34.02 MiB/s 00:39:06.242 Latency(us) 00:39:06.242 [2024-11-18T19:40:18.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:06.242 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:39:06.242 Verification LBA range: start 0x0 length 0x4000 00:39:06.242 NVMe0n1 : 10.07 8747.67 34.17 0.00 0.00 116602.25 15049.01 71070.15 00:39:06.242 [2024-11-18T19:40:18.250Z] =================================================================================================================== 00:39:06.242 [2024-11-18T19:40:18.250Z] Total : 8747.67 34.17 0.00 0.00 116602.25 15049.01 71070.15 00:39:06.242 { 00:39:06.242 "results": [ 00:39:06.242 { 00:39:06.242 "job": "NVMe0n1", 00:39:06.242 "core_mask": "0x1", 00:39:06.242 "workload": "verify", 00:39:06.242 "status": "finished", 00:39:06.242 "verify_range": { 00:39:06.242 "start": 0, 00:39:06.242 "length": 16384 00:39:06.242 }, 00:39:06.242 "queue_depth": 1024, 00:39:06.242 "io_size": 4096, 00:39:06.242 "runtime": 10.072742, 00:39:06.242 "iops": 8747.667715503881, 00:39:06.242 "mibps": 34.17057701368704, 00:39:06.242 "io_failed": 0, 00:39:06.242 "io_timeout": 0, 00:39:06.242 "avg_latency_us": 116602.24513791423, 00:39:06.242 "min_latency_us": 15049.007407407407, 00:39:06.242 "max_latency_us": 71070.15111111112 00:39:06.242 } 00:39:06.242 ], 00:39:06.242 "core_count": 1 00:39:06.242 } 00:39:06.242 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 435770 00:39:06.242 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 435770 ']' 00:39:06.242 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 435770 00:39:06.243 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:06.243 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:06.243 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 435770 00:39:06.243 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:06.243 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:06.243 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 435770' 00:39:06.243 killing process with pid 435770 00:39:06.243 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 435770 00:39:06.243 Received shutdown signal, test time was about 10.000000 seconds 00:39:06.243 00:39:06.243 Latency(us) 00:39:06.243 [2024-11-18T19:40:18.251Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:06.243 [2024-11-18T19:40:18.251Z] =================================================================================================================== 00:39:06.243 [2024-11-18T19:40:18.251Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:06.243 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 435770 00:39:06.243 20:40:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:39:06.243 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:39:06.243 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:06.243 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:39:06.243 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:06.243 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:39:06.243 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:06.243 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:06.243 rmmod nvme_tcp 00:39:06.243 rmmod nvme_fabrics 00:39:06.501 rmmod nvme_keyring 00:39:06.501 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:06.501 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:39:06.501 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:39:06.501 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 435745 ']' 00:39:06.501 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 435745 00:39:06.501 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 435745 ']' 00:39:06.501 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 435745 00:39:06.501 20:40:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:06.501 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:06.501 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 435745 00:39:06.501 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:06.501 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:06.501 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 435745' 00:39:06.501 killing process with pid 435745 00:39:06.501 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 435745 00:39:06.501 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 435745 00:39:06.759 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:06.759 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:06.759 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:06.759 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:39:06.759 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:39:06.759 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:06.759 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:39:06.759 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:06.759 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:06.759 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:06.759 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:06.759 20:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:08.766 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:08.766 00:39:08.766 real 0m15.966s 00:39:08.766 user 0m22.046s 00:39:08.766 sys 0m3.351s 00:39:08.766 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:08.766 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:08.766 ************************************ 00:39:08.766 END TEST nvmf_queue_depth 00:39:08.766 ************************************ 00:39:08.766 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:08.766 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:08.766 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:08.766 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:08.766 ************************************ 00:39:08.766 START 
TEST nvmf_target_multipath 00:39:08.766 ************************************ 00:39:08.766 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:08.766 * Looking for test storage... 00:39:08.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:08.766 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:08.766 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:39:08.766 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:08.766 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:08.766 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:08.766 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:08.766 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:39:08.767 20:40:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:08.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:08.767 --rc genhtml_branch_coverage=1 00:39:08.767 --rc genhtml_function_coverage=1 00:39:08.767 --rc genhtml_legend=1 00:39:08.767 --rc geninfo_all_blocks=1 00:39:08.767 --rc geninfo_unexecuted_blocks=1 00:39:08.767 00:39:08.767 ' 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:08.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:08.767 --rc genhtml_branch_coverage=1 00:39:08.767 --rc genhtml_function_coverage=1 00:39:08.767 --rc genhtml_legend=1 00:39:08.767 --rc geninfo_all_blocks=1 00:39:08.767 --rc geninfo_unexecuted_blocks=1 00:39:08.767 00:39:08.767 ' 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:08.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:08.767 --rc genhtml_branch_coverage=1 00:39:08.767 --rc genhtml_function_coverage=1 00:39:08.767 --rc genhtml_legend=1 00:39:08.767 --rc geninfo_all_blocks=1 00:39:08.767 --rc geninfo_unexecuted_blocks=1 00:39:08.767 00:39:08.767 ' 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:08.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:08.767 --rc genhtml_branch_coverage=1 00:39:08.767 --rc genhtml_function_coverage=1 00:39:08.767 --rc genhtml_legend=1 00:39:08.767 --rc geninfo_all_blocks=1 00:39:08.767 --rc geninfo_unexecuted_blocks=1 00:39:08.767 00:39:08.767 ' 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:08.767 20:40:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:08.767 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:08.768 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:39:08.768 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:08.768 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:39:08.768 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:08.768 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:08.768 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:08.768 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:08.768 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:08.768 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:08.768 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:08.768 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:08.768 20:40:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:08.768 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:08.768 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:39:08.768 20:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:39:11.304 20:40:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:11.304 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:11.304 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:11.304 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:11.304 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:11.305 20:40:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:11.305 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:11.305 20:40:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:11.305 20:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:11.305 20:40:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:11.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:11.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:39:11.305 00:39:11.305 --- 10.0.0.2 ping statistics --- 00:39:11.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:11.305 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:11.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:11.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:39:11.305 00:39:11.305 --- 10.0.0.1 ping statistics --- 00:39:11.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:11.305 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:39:11.305 only one NIC for nvmf test 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:39:11.305 20:40:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:11.305 rmmod nvme_tcp 00:39:11.305 rmmod nvme_fabrics 00:39:11.305 rmmod nvme_keyring 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:39:11.305 20:40:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:11.305 20:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:13.223 
20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:13.223 00:39:13.223 real 0m4.589s 00:39:13.223 user 0m0.938s 00:39:13.223 sys 0m1.634s 00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:39:13.223 ************************************ 00:39:13.223 END TEST nvmf_target_multipath 00:39:13.223 ************************************ 00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:13.223 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:13.482 ************************************ 00:39:13.482 START TEST nvmf_zcopy 00:39:13.482 ************************************ 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:13.482 * Looking for test storage... 
00:39:13.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:39:13.482 20:40:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:13.482 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:13.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:13.482 --rc genhtml_branch_coverage=1 00:39:13.482 --rc genhtml_function_coverage=1 00:39:13.482 --rc genhtml_legend=1 00:39:13.482 --rc geninfo_all_blocks=1 00:39:13.483 --rc geninfo_unexecuted_blocks=1 00:39:13.483 00:39:13.483 ' 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:13.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:13.483 --rc genhtml_branch_coverage=1 00:39:13.483 --rc genhtml_function_coverage=1 00:39:13.483 --rc genhtml_legend=1 00:39:13.483 --rc geninfo_all_blocks=1 00:39:13.483 --rc geninfo_unexecuted_blocks=1 00:39:13.483 00:39:13.483 ' 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:13.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:13.483 --rc genhtml_branch_coverage=1 00:39:13.483 --rc genhtml_function_coverage=1 00:39:13.483 --rc genhtml_legend=1 00:39:13.483 --rc geninfo_all_blocks=1 00:39:13.483 --rc geninfo_unexecuted_blocks=1 00:39:13.483 00:39:13.483 ' 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:13.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:13.483 --rc genhtml_branch_coverage=1 00:39:13.483 --rc genhtml_function_coverage=1 00:39:13.483 --rc genhtml_legend=1 00:39:13.483 --rc geninfo_all_blocks=1 00:39:13.483 --rc geninfo_unexecuted_blocks=1 00:39:13.483 00:39:13.483 ' 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:13.483 20:40:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:13.483 20:40:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:39:13.483 20:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:16.020 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:16.020 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:39:16.020 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:16.020 
20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:16.020 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:16.020 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:16.020 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:16.020 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:39:16.020 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:16.020 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:39:16.020 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:39:16.020 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:39:16.020 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:39:16.020 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:39:16.020 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:16.021 20:40:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:16.021 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:16.021 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:16.021 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:16.021 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:16.021 20:40:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:16.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:16.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:39:16.021 00:39:16.021 --- 10.0.0.2 ping statistics --- 00:39:16.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:16.021 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:16.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:16.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:39:16.021 00:39:16.021 --- 10.0.0.1 ping statistics --- 00:39:16.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:16.021 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:16.021 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=440948 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 440948 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 440948 ']' 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:16.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:16.022 [2024-11-18 20:40:27.667359] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:16.022 [2024-11-18 20:40:27.668425] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:39:16.022 [2024-11-18 20:40:27.668491] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:16.022 [2024-11-18 20:40:27.739695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:16.022 [2024-11-18 20:40:27.783716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:16.022 [2024-11-18 20:40:27.783770] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:16.022 [2024-11-18 20:40:27.783793] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:16.022 [2024-11-18 20:40:27.783803] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:16.022 [2024-11-18 20:40:27.783813] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:16.022 [2024-11-18 20:40:27.784388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:16.022 [2024-11-18 20:40:27.865547] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:16.022 [2024-11-18 20:40:27.865874] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:16.022 [2024-11-18 20:40:27.924935] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:16.022 
20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:16.022 [2024-11-18 20:40:27.941080] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:16.022 malloc0 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:16.022 { 00:39:16.022 "params": { 00:39:16.022 "name": "Nvme$subsystem", 00:39:16.022 "trtype": "$TEST_TRANSPORT", 00:39:16.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:16.022 "adrfam": "ipv4", 00:39:16.022 "trsvcid": "$NVMF_PORT", 00:39:16.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:16.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:16.022 "hdgst": ${hdgst:-false}, 00:39:16.022 "ddgst": ${ddgst:-false} 00:39:16.022 }, 00:39:16.022 "method": "bdev_nvme_attach_controller" 00:39:16.022 } 00:39:16.022 EOF 00:39:16.022 )") 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:39:16.022 20:40:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:39:16.022 20:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:16.022 "params": { 00:39:16.022 "name": "Nvme1", 00:39:16.022 "trtype": "tcp", 00:39:16.022 "traddr": "10.0.0.2", 00:39:16.022 "adrfam": "ipv4", 00:39:16.022 "trsvcid": "4420", 00:39:16.022 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:16.022 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:16.022 "hdgst": false, 00:39:16.022 "ddgst": false 00:39:16.022 }, 00:39:16.022 "method": "bdev_nvme_attach_controller" 00:39:16.022 }' 00:39:16.022 [2024-11-18 20:40:28.018520] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:39:16.022 [2024-11-18 20:40:28.018609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid440968 ] 00:39:16.282 [2024-11-18 20:40:28.087328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:16.282 [2024-11-18 20:40:28.133143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:16.542 Running I/O for 10 seconds... 
00:39:18.856 4951.00 IOPS, 38.68 MiB/s [2024-11-18T19:40:31.803Z] 5007.00 IOPS, 39.12 MiB/s [2024-11-18T19:40:32.744Z] 5013.33 IOPS, 39.17 MiB/s [2024-11-18T19:40:33.685Z] 5021.50 IOPS, 39.23 MiB/s [2024-11-18T19:40:34.624Z] 5036.80 IOPS, 39.35 MiB/s [2024-11-18T19:40:35.560Z] 5035.50 IOPS, 39.34 MiB/s [2024-11-18T19:40:36.498Z] 5039.00 IOPS, 39.37 MiB/s [2024-11-18T19:40:37.874Z] 5044.38 IOPS, 39.41 MiB/s [2024-11-18T19:40:38.816Z] 5050.56 IOPS, 39.46 MiB/s [2024-11-18T19:40:38.816Z] 5048.80 IOPS, 39.44 MiB/s 00:39:26.808 Latency(us) 00:39:26.808 [2024-11-18T19:40:38.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:26.808 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:39:26.808 Verification LBA range: start 0x0 length 0x1000 00:39:26.808 Nvme1n1 : 10.01 5053.34 39.48 0.00 0.00 25264.52 801.00 33010.73 00:39:26.808 [2024-11-18T19:40:38.816Z] =================================================================================================================== 00:39:26.808 [2024-11-18T19:40:38.816Z] Total : 5053.34 39.48 0.00 0.00 25264.52 801.00 33010.73 00:39:26.808 20:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=442269 00:39:26.808 20:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:39:26.808 20:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:26.808 20:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:39:26.808 20:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:39:26.808 20:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:39:26.808 20:40:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:39:26.808 20:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:26.808 20:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:26.808 { 00:39:26.808 "params": { 00:39:26.808 "name": "Nvme$subsystem", 00:39:26.808 "trtype": "$TEST_TRANSPORT", 00:39:26.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:26.808 "adrfam": "ipv4", 00:39:26.808 "trsvcid": "$NVMF_PORT", 00:39:26.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:26.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:26.808 "hdgst": ${hdgst:-false}, 00:39:26.808 "ddgst": ${ddgst:-false} 00:39:26.808 }, 00:39:26.808 "method": "bdev_nvme_attach_controller" 00:39:26.808 } 00:39:26.808 EOF 00:39:26.808 )") 00:39:26.808 20:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:39:26.808 [2024-11-18 20:40:38.700916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.808 [2024-11-18 20:40:38.700974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.808 20:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:39:26.808 20:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:39:26.808 20:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:26.808 "params": { 00:39:26.808 "name": "Nvme1", 00:39:26.808 "trtype": "tcp", 00:39:26.808 "traddr": "10.0.0.2", 00:39:26.808 "adrfam": "ipv4", 00:39:26.808 "trsvcid": "4420", 00:39:26.808 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:26.808 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:26.808 "hdgst": false, 00:39:26.808 "ddgst": false 00:39:26.808 }, 00:39:26.808 "method": "bdev_nvme_attach_controller" 00:39:26.808 }' 00:39:26.808 [2024-11-18 20:40:38.708831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.808 [2024-11-18 20:40:38.708855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.808 [2024-11-18 20:40:38.716829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.808 [2024-11-18 20:40:38.716851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.808 [2024-11-18 20:40:38.724825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.808 [2024-11-18 20:40:38.724846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.808 [2024-11-18 20:40:38.732827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.808 [2024-11-18 20:40:38.732848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.808 [2024-11-18 20:40:38.739723] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:39:26.808 [2024-11-18 20:40:38.739783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid442269 ] 00:39:26.808 [2024-11-18 20:40:38.740829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.808 [2024-11-18 20:40:38.740852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.808 [2024-11-18 20:40:38.748830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.808 [2024-11-18 20:40:38.748854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.808 [2024-11-18 20:40:38.756827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.808 [2024-11-18 20:40:38.756848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.808 [2024-11-18 20:40:38.764827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.808 [2024-11-18 20:40:38.764848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.808 [2024-11-18 20:40:38.772823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.808 [2024-11-18 20:40:38.772843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.808 [2024-11-18 20:40:38.780829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.808 [2024-11-18 20:40:38.780851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.808 [2024-11-18 20:40:38.788827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.808 [2024-11-18 20:40:38.788848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:39:26.808 [2024-11-18 20:40:38.796825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.808 [2024-11-18 20:40:38.796847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.808 [2024-11-18 20:40:38.804824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.808 [2024-11-18 20:40:38.804844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.808 [2024-11-18 20:40:38.808579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:26.808 [2024-11-18 20:40:38.812833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.808 [2024-11-18 20:40:38.812855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:38.820879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:38.820918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:38.828853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:38.828883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:38.836833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:38.836855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:38.844827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:38.844848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:38.852825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:38.852855] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:38.857841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:27.070 [2024-11-18 20:40:38.860825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:38.860845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:38.868823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:38.868843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:38.876875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:38.876914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:38.884880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:38.884921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:38.892883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:38.892926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:38.900881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:38.900921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:38.908883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:38.908923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:38.916879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:39:27.070 [2024-11-18 20:40:38.916935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:38.924840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:38.924865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:38.932867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:38.932902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:38.940881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:38.940920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:38.948877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:38.948918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:38.956832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:38.956855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:38.964829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:38.964850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:38.972834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:38.972862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:38.980831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 
20:40:38.980855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:38.988830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:38.988854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:38.996832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:38.996856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:39.004825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:39.004847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:39.012824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:39.012846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:39.020825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:39.020846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:39.028825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:39.028846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:39.036827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:39.036850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:39.044849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:39.044872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:39.052826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:39.052849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:39.060826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:39.060847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:39.068824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:39.068846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.070 [2024-11-18 20:40:39.076841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.070 [2024-11-18 20:40:39.076862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.331 [2024-11-18 20:40:39.084839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.331 [2024-11-18 20:40:39.084861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.331 [2024-11-18 20:40:39.092848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.331 [2024-11-18 20:40:39.092872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.331 [2024-11-18 20:40:39.100841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.331 [2024-11-18 20:40:39.100862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.331 [2024-11-18 20:40:39.108841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.331 [2024-11-18 20:40:39.108862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.331 
[2024-11-18 20:40:39.116826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.331 [2024-11-18 20:40:39.116862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.331 [2024-11-18 20:40:39.124827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.331 [2024-11-18 20:40:39.124849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.331 [2024-11-18 20:40:39.132830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.331 [2024-11-18 20:40:39.132853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.331 [2024-11-18 20:40:39.140828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.331 [2024-11-18 20:40:39.140851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.331 [2024-11-18 20:40:39.148828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.331 [2024-11-18 20:40:39.148850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.331 [2024-11-18 20:40:39.156827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.331 [2024-11-18 20:40:39.156849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.331 [2024-11-18 20:40:39.164826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.331 [2024-11-18 20:40:39.164848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.331 [2024-11-18 20:40:39.172840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.331 [2024-11-18 20:40:39.172861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.331 [2024-11-18 20:40:39.180826] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.331 [2024-11-18 20:40:39.180849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.331 [2024-11-18 20:40:39.188850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.331 [2024-11-18 20:40:39.188876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.331 [2024-11-18 20:40:39.196831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.331 [2024-11-18 20:40:39.196855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.331 Running I/O for 5 seconds... 00:39:27.331 [2024-11-18 20:40:39.213216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.331 [2024-11-18 20:40:39.213242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.331 [2024-11-18 20:40:39.222899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.331 [2024-11-18 20:40:39.222941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.331 [2024-11-18 20:40:39.237841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.331 [2024-11-18 20:40:39.237868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.331 [2024-11-18 20:40:39.247865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.331 [2024-11-18 20:40:39.247905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.331 [2024-11-18 20:40:39.260359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.331 [2024-11-18 20:40:39.260385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.331 [2024-11-18 20:40:39.275132] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.331 [2024-11-18 20:40:39.275158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.331 [2024-11-18 20:40:39.292869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.331 [2024-11-18 20:40:39.292896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.331 [2024-11-18 20:40:39.303590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.331 [2024-11-18 20:40:39.303615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.331 [2024-11-18 20:40:39.318282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.331 [2024-11-18 20:40:39.318309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.331 [2024-11-18 20:40:39.327805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.331 [2024-11-18 20:40:39.327832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.590 [2024-11-18 20:40:39.341800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.590 [2024-11-18 20:40:39.341827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.591 [2024-11-18 20:40:39.351113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.591 [2024-11-18 20:40:39.351138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.591 [2024-11-18 20:40:39.367054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.591 [2024-11-18 20:40:39.367088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.591 [2024-11-18 20:40:39.383007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:27.591 [2024-11-18 20:40:39.383034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.591 [2024-11-18 20:40:39.400855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.591 [2024-11-18 20:40:39.400883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.591 [2024-11-18 20:40:39.410441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.591 [2024-11-18 20:40:39.410482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.591 [2024-11-18 20:40:39.426583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.591 [2024-11-18 20:40:39.426610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.591 [2024-11-18 20:40:39.445628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.591 [2024-11-18 20:40:39.445663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.591 [2024-11-18 20:40:39.463090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.591 [2024-11-18 20:40:39.463115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.591 [2024-11-18 20:40:39.481023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.591 [2024-11-18 20:40:39.481065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.591 [2024-11-18 20:40:39.491794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.591 [2024-11-18 20:40:39.491821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.591 [2024-11-18 20:40:39.507050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.591 
[2024-11-18 20:40:39.507076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.591 [2024-11-18 20:40:39.522566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.591 [2024-11-18 20:40:39.522593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.591 [2024-11-18 20:40:39.541133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.591 [2024-11-18 20:40:39.541158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.591 [2024-11-18 20:40:39.552128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.591 [2024-11-18 20:40:39.552153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.591 [2024-11-18 20:40:39.564928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.591 [2024-11-18 20:40:39.564969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.591 [2024-11-18 20:40:39.574175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.591 [2024-11-18 20:40:39.574201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.591 [2024-11-18 20:40:39.589993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.591 [2024-11-18 20:40:39.590020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.851 [2024-11-18 20:40:39.599361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.851 [2024-11-18 20:40:39.599402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.851 [2024-11-18 20:40:39.613658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.851 [2024-11-18 20:40:39.613711] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.851 [2024-11-18 20:40:39.623157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.851 [2024-11-18 20:40:39.623183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair (subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused: "Unable to add namespace") repeats for each subsequent add-namespace attempt from 20:40:39.637 through 20:40:41.558 ...]
00:39:28.370 11662.00 IOPS, 91.11 MiB/s [2024-11-18T19:40:40.378Z]
00:39:29.409 11677.50 IOPS, 91.23 MiB/s [2024-11-18T19:40:41.417Z]
00:39:29.668 [2024-11-18 20:40:41.567767] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.668 [2024-11-18 20:40:41.567792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.668 [2024-11-18 20:40:41.583458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.668 [2024-11-18 20:40:41.583482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.668 [2024-11-18 20:40:41.598999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.668 [2024-11-18 20:40:41.599025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.668 [2024-11-18 20:40:41.608501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.668 [2024-11-18 20:40:41.608526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.668 [2024-11-18 20:40:41.620198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.668 [2024-11-18 20:40:41.620223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.668 [2024-11-18 20:40:41.630870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.668 [2024-11-18 20:40:41.630912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.668 [2024-11-18 20:40:41.646570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.668 [2024-11-18 20:40:41.646595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.668 [2024-11-18 20:40:41.656135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.668 [2024-11-18 20:40:41.656160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.668 [2024-11-18 20:40:41.668168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:29.668 [2024-11-18 20:40:41.668193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.926 [2024-11-18 20:40:41.679226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.927 [2024-11-18 20:40:41.679267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.927 [2024-11-18 20:40:41.694744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.927 [2024-11-18 20:40:41.694771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.927 [2024-11-18 20:40:41.704184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.927 [2024-11-18 20:40:41.704226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.927 [2024-11-18 20:40:41.715666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.927 [2024-11-18 20:40:41.715705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.927 [2024-11-18 20:40:41.730394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.927 [2024-11-18 20:40:41.730422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.927 [2024-11-18 20:40:41.739385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.927 [2024-11-18 20:40:41.739409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.927 [2024-11-18 20:40:41.753315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.927 [2024-11-18 20:40:41.753340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.927 [2024-11-18 20:40:41.762521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.927 
[2024-11-18 20:40:41.762546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.927 [2024-11-18 20:40:41.774546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.927 [2024-11-18 20:40:41.774570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.927 [2024-11-18 20:40:41.790881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.927 [2024-11-18 20:40:41.790907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.927 [2024-11-18 20:40:41.800620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.927 [2024-11-18 20:40:41.800671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.927 [2024-11-18 20:40:41.812831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.927 [2024-11-18 20:40:41.812858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.927 [2024-11-18 20:40:41.823403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.927 [2024-11-18 20:40:41.823450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.927 [2024-11-18 20:40:41.837625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.927 [2024-11-18 20:40:41.837662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.927 [2024-11-18 20:40:41.847324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.927 [2024-11-18 20:40:41.847350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.927 [2024-11-18 20:40:41.861417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.927 [2024-11-18 20:40:41.861443] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.927 [2024-11-18 20:40:41.870380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.927 [2024-11-18 20:40:41.870406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.927 [2024-11-18 20:40:41.882539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.927 [2024-11-18 20:40:41.882564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.927 [2024-11-18 20:40:41.897390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.927 [2024-11-18 20:40:41.897416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.927 [2024-11-18 20:40:41.906963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.927 [2024-11-18 20:40:41.906988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.927 [2024-11-18 20:40:41.918705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.927 [2024-11-18 20:40:41.918731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.185 [2024-11-18 20:40:41.934529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.185 [2024-11-18 20:40:41.934556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.185 [2024-11-18 20:40:41.944123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.185 [2024-11-18 20:40:41.944149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.185 [2024-11-18 20:40:41.956020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.185 [2024-11-18 20:40:41.956045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:30.185 [2024-11-18 20:40:41.970529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.185 [2024-11-18 20:40:41.970570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.185 [2024-11-18 20:40:41.980168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.185 [2024-11-18 20:40:41.980194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.185 [2024-11-18 20:40:41.992185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.185 [2024-11-18 20:40:41.992210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.185 [2024-11-18 20:40:42.008000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.186 [2024-11-18 20:40:42.008037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.186 [2024-11-18 20:40:42.022602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.186 [2024-11-18 20:40:42.022660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.186 [2024-11-18 20:40:42.032127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.186 [2024-11-18 20:40:42.032152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.186 [2024-11-18 20:40:42.043889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.186 [2024-11-18 20:40:42.043915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.186 [2024-11-18 20:40:42.056503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.186 [2024-11-18 20:40:42.056530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.186 [2024-11-18 20:40:42.065948] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.186 [2024-11-18 20:40:42.065988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.186 [2024-11-18 20:40:42.077523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.186 [2024-11-18 20:40:42.077547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.186 [2024-11-18 20:40:42.087673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.186 [2024-11-18 20:40:42.087700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.186 [2024-11-18 20:40:42.102444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.186 [2024-11-18 20:40:42.102468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.186 [2024-11-18 20:40:42.111573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.186 [2024-11-18 20:40:42.111597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.186 [2024-11-18 20:40:42.126288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.186 [2024-11-18 20:40:42.126329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.186 [2024-11-18 20:40:42.136068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.186 [2024-11-18 20:40:42.136094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.186 [2024-11-18 20:40:42.147648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.186 [2024-11-18 20:40:42.147673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.186 [2024-11-18 20:40:42.162579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:30.186 [2024-11-18 20:40:42.162605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.186 [2024-11-18 20:40:42.172603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.186 [2024-11-18 20:40:42.172654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.186 [2024-11-18 20:40:42.184400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.186 [2024-11-18 20:40:42.184424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.444 [2024-11-18 20:40:42.195566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.444 [2024-11-18 20:40:42.195605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.444 [2024-11-18 20:40:42.209925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.444 [2024-11-18 20:40:42.209973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.444 11692.67 IOPS, 91.35 MiB/s [2024-11-18T19:40:42.452Z] [2024-11-18 20:40:42.219533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.444 [2024-11-18 20:40:42.219557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.444 [2024-11-18 20:40:42.233117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.444 [2024-11-18 20:40:42.233142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.444 [2024-11-18 20:40:42.242477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.444 [2024-11-18 20:40:42.242502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.444 [2024-11-18 20:40:42.257629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:30.444 [2024-11-18 20:40:42.257662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.444 [2024-11-18 20:40:42.266968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.444 [2024-11-18 20:40:42.266993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.444 [2024-11-18 20:40:42.278329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.444 [2024-11-18 20:40:42.278367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.444 [2024-11-18 20:40:42.294385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.444 [2024-11-18 20:40:42.294410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.444 [2024-11-18 20:40:42.303954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.444 [2024-11-18 20:40:42.303979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.444 [2024-11-18 20:40:42.319505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.445 [2024-11-18 20:40:42.319530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.445 [2024-11-18 20:40:42.332369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.445 [2024-11-18 20:40:42.332396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.445 [2024-11-18 20:40:42.346124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.445 [2024-11-18 20:40:42.346151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.445 [2024-11-18 20:40:42.355657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.445 
[2024-11-18 20:40:42.355683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.445 [2024-11-18 20:40:42.370033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.445 [2024-11-18 20:40:42.370058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.445 [2024-11-18 20:40:42.379158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.445 [2024-11-18 20:40:42.379182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.445 [2024-11-18 20:40:42.393766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.445 [2024-11-18 20:40:42.393793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.445 [2024-11-18 20:40:42.403335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.445 [2024-11-18 20:40:42.403361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.445 [2024-11-18 20:40:42.418188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.445 [2024-11-18 20:40:42.418213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.445 [2024-11-18 20:40:42.427875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.445 [2024-11-18 20:40:42.427901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.445 [2024-11-18 20:40:42.444023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.445 [2024-11-18 20:40:42.444060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.704 [2024-11-18 20:40:42.453773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.704 [2024-11-18 20:40:42.453822] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.704 [2024-11-18 20:40:42.465528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.704 [2024-11-18 20:40:42.465567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.704 [2024-11-18 20:40:42.476115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.704 [2024-11-18 20:40:42.476139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.704 [2024-11-18 20:40:42.492030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.704 [2024-11-18 20:40:42.492054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.704 [2024-11-18 20:40:42.502124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.704 [2024-11-18 20:40:42.502149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.704 [2024-11-18 20:40:42.517923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.704 [2024-11-18 20:40:42.517951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.704 [2024-11-18 20:40:42.527763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.704 [2024-11-18 20:40:42.527805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.704 [2024-11-18 20:40:42.542241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.704 [2024-11-18 20:40:42.542267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.704 [2024-11-18 20:40:42.551844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.704 [2024-11-18 20:40:42.551870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:30.704 [2024-11-18 20:40:42.566909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.704 [2024-11-18 20:40:42.566936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.704 [2024-11-18 20:40:42.581025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.704 [2024-11-18 20:40:42.581051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.704 [2024-11-18 20:40:42.590930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.704 [2024-11-18 20:40:42.590956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.704 [2024-11-18 20:40:42.602987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.704 [2024-11-18 20:40:42.603013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.704 [2024-11-18 20:40:42.618885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.704 [2024-11-18 20:40:42.618911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.704 [2024-11-18 20:40:42.628513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.704 [2024-11-18 20:40:42.628551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.704 [2024-11-18 20:40:42.640582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.704 [2024-11-18 20:40:42.640607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.704 [2024-11-18 20:40:42.653073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.704 [2024-11-18 20:40:42.653115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.704 [2024-11-18 20:40:42.662679] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.704 [2024-11-18 20:40:42.662704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.704 [2024-11-18 20:40:42.674100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.704 [2024-11-18 20:40:42.674131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.704 [2024-11-18 20:40:42.688697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.704 [2024-11-18 20:40:42.688724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.704 [2024-11-18 20:40:42.698283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.704 [2024-11-18 20:40:42.698312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.963 [2024-11-18 20:40:42.714413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.963 [2024-11-18 20:40:42.714439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.963 [2024-11-18 20:40:42.732381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.963 [2024-11-18 20:40:42.732406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.963 [2024-11-18 20:40:42.741779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.963 [2024-11-18 20:40:42.741805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.963 [2024-11-18 20:40:42.753699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.963 [2024-11-18 20:40:42.753725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.963 [2024-11-18 20:40:42.764293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:30.963 [2024-11-18 20:40:42.764317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.963 [2024-11-18 20:40:42.777327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.963 [2024-11-18 20:40:42.777352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.963 [2024-11-18 20:40:42.786799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.963 [2024-11-18 20:40:42.786825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.963 [2024-11-18 20:40:42.798842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.963 [2024-11-18 20:40:42.798869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.963 [2024-11-18 20:40:42.814055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.963 [2024-11-18 20:40:42.814095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.963 [2024-11-18 20:40:42.823358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.963 [2024-11-18 20:40:42.823383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.963 [2024-11-18 20:40:42.837544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.963 [2024-11-18 20:40:42.837569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.963 [2024-11-18 20:40:42.847107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.963 [2024-11-18 20:40:42.847132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.963 [2024-11-18 20:40:42.862580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.963 
[2024-11-18 20:40:42.862606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.963 [2024-11-18 20:40:42.880371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.963 [2024-11-18 20:40:42.880398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.963 [2024-11-18 20:40:42.890300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.963 [2024-11-18 20:40:42.890325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.963 [2024-11-18 20:40:42.901757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.963 [2024-11-18 20:40:42.901783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.963 [2024-11-18 20:40:42.912645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.963 [2024-11-18 20:40:42.912696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.963 [2024-11-18 20:40:42.923557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.963 [2024-11-18 20:40:42.923597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.963 [2024-11-18 20:40:42.938565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.963 [2024-11-18 20:40:42.938606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.963 [2024-11-18 20:40:42.948158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.963 [2024-11-18 20:40:42.948183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:30.963 [2024-11-18 20:40:42.960323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:30.963 [2024-11-18 20:40:42.960350] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.223 [2024-11-18 20:40:42.972886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.223 [2024-11-18 20:40:42.972913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.223 [2024-11-18 20:40:42.982116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.223 [2024-11-18 20:40:42.982155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.223 [2024-11-18 20:40:42.993708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.223 [2024-11-18 20:40:42.993733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.223 [2024-11-18 20:40:43.004278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.223 [2024-11-18 20:40:43.004302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.223 [2024-11-18 20:40:43.019120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.223 [2024-11-18 20:40:43.019145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.223 [2024-11-18 20:40:43.028756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.223 [2024-11-18 20:40:43.028782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.223 [2024-11-18 20:40:43.040430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.223 [2024-11-18 20:40:43.040471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.223 [2024-11-18 20:40:43.052240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.223 [2024-11-18 20:40:43.052266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:31.223 [2024-11-18 20:40:43.066530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.223 [2024-11-18 20:40:43.066556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.223 [2024-11-18 20:40:43.075726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.223 [2024-11-18 20:40:43.075752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.223 [2024-11-18 20:40:43.091272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.223 [2024-11-18 20:40:43.091297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.223 [2024-11-18 20:40:43.108663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.223 [2024-11-18 20:40:43.108689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.223 [2024-11-18 20:40:43.118362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.223 [2024-11-18 20:40:43.118403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.223 [2024-11-18 20:40:43.130285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.223 [2024-11-18 20:40:43.130310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.223 [2024-11-18 20:40:43.144972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.223 [2024-11-18 20:40:43.144997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.223 [2024-11-18 20:40:43.154932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.223 [2024-11-18 20:40:43.154957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.223 [2024-11-18 20:40:43.166798] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.223 [2024-11-18 20:40:43.166824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.223 [2024-11-18 20:40:43.182035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.223 [2024-11-18 20:40:43.182062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.223 [2024-11-18 20:40:43.191766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.223 [2024-11-18 20:40:43.191792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.223 [2024-11-18 20:40:43.205414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.223 [2024-11-18 20:40:43.205438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.223 11713.75 IOPS, 91.51 MiB/s [2024-11-18T19:40:43.231Z] [2024-11-18 20:40:43.214302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.223 [2024-11-18 20:40:43.214342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.223 [2024-11-18 20:40:43.226785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.223 [2024-11-18 20:40:43.226810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.483 [2024-11-18 20:40:43.242030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.483 [2024-11-18 20:40:43.242055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.483 [2024-11-18 20:40:43.251790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.483 [2024-11-18 20:40:43.251819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.483 [2024-11-18 20:40:43.266851] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.483 [2024-11-18 20:40:43.266878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.483 [2024-11-18 20:40:43.284961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.483 [2024-11-18 20:40:43.284987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.483 [2024-11-18 20:40:43.295429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.483 [2024-11-18 20:40:43.295453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.483 [2024-11-18 20:40:43.309515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.483 [2024-11-18 20:40:43.309542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.483 [2024-11-18 20:40:43.318772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.484 [2024-11-18 20:40:43.318799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.484 [2024-11-18 20:40:43.330550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.484 [2024-11-18 20:40:43.330574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.484 [2024-11-18 20:40:43.346272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.484 [2024-11-18 20:40:43.346298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.484 [2024-11-18 20:40:43.355453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.484 [2024-11-18 20:40:43.355493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.484 [2024-11-18 20:40:43.370296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:31.484 [2024-11-18 20:40:43.370337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.484 [2024-11-18 20:40:43.379869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.484 [2024-11-18 20:40:43.379896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.484 [2024-11-18 20:40:43.393845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.484 [2024-11-18 20:40:43.393871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.484 [2024-11-18 20:40:43.403334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.484 [2024-11-18 20:40:43.403359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.484 [2024-11-18 20:40:43.418399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.484 [2024-11-18 20:40:43.418425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.484 [2024-11-18 20:40:43.428011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.484 [2024-11-18 20:40:43.428034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.484 [2024-11-18 20:40:43.441768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.484 [2024-11-18 20:40:43.441793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.484 [2024-11-18 20:40:43.451545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.484 [2024-11-18 20:40:43.451569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.484 [2024-11-18 20:40:43.467557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.484 
[2024-11-18 20:40:43.467582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.484 [2024-11-18 20:40:43.482381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.484 [2024-11-18 20:40:43.482423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.743 [2024-11-18 20:40:43.491679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.743 [2024-11-18 20:40:43.491706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.743 [2024-11-18 20:40:43.507457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.743 [2024-11-18 20:40:43.507481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.743 [2024-11-18 20:40:43.522819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.743 [2024-11-18 20:40:43.522859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.743 [2024-11-18 20:40:43.531936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.743 [2024-11-18 20:40:43.531960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.743 [2024-11-18 20:40:43.545730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.743 [2024-11-18 20:40:43.545771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.743 [2024-11-18 20:40:43.554737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.743 [2024-11-18 20:40:43.554761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.743 [2024-11-18 20:40:43.566341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.743 [2024-11-18 20:40:43.566365] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.743 [2024-11-18 20:40:43.582030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.743 [2024-11-18 20:40:43.582055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.743 [2024-11-18 20:40:43.591524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.743 [2024-11-18 20:40:43.591549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.743 [2024-11-18 20:40:43.605557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.743 [2024-11-18 20:40:43.605588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.743 [2024-11-18 20:40:43.615174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.743 [2024-11-18 20:40:43.615198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.743 [2024-11-18 20:40:43.629460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.743 [2024-11-18 20:40:43.629485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.743 [2024-11-18 20:40:43.638778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.743 [2024-11-18 20:40:43.638803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.743 [2024-11-18 20:40:43.650236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.743 [2024-11-18 20:40:43.650262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.743 [2024-11-18 20:40:43.664982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.743 [2024-11-18 20:40:43.665007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:31.743 [2024-11-18 20:40:43.674699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.743 [2024-11-18 20:40:43.674738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.743 [2024-11-18 20:40:43.686596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.743 [2024-11-18 20:40:43.686621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.743 [2024-11-18 20:40:43.702813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.743 [2024-11-18 20:40:43.702839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.743 [2024-11-18 20:40:43.711898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.743 [2024-11-18 20:40:43.711940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.743 [2024-11-18 20:40:43.725683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.744 [2024-11-18 20:40:43.725710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.744 [2024-11-18 20:40:43.735071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.744 [2024-11-18 20:40:43.735095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:31.744 [2024-11-18 20:40:43.746941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:31.744 [2024-11-18 20:40:43.746965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.002 [2024-11-18 20:40:43.761842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.002 [2024-11-18 20:40:43.761870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.002 [2024-11-18 20:40:43.771093] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.002 [2024-11-18 20:40:43.771117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.002 [2024-11-18 20:40:43.783061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.002 [2024-11-18 20:40:43.783099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.002 [2024-11-18 20:40:43.798014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.002 [2024-11-18 20:40:43.798040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.002 [2024-11-18 20:40:43.807508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.002 [2024-11-18 20:40:43.807533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.002 [2024-11-18 20:40:43.819166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.002 [2024-11-18 20:40:43.819191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.002 [2024-11-18 20:40:43.835512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.002 [2024-11-18 20:40:43.835545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.002 [2024-11-18 20:40:43.850890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.002 [2024-11-18 20:40:43.850916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.002 [2024-11-18 20:40:43.869293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.002 [2024-11-18 20:40:43.869319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.002 [2024-11-18 20:40:43.878839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:32.002 [2024-11-18 20:40:43.878864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.002 [2024-11-18 20:40:43.890438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.002 [2024-11-18 20:40:43.890462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.002 [2024-11-18 20:40:43.908030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.002 [2024-11-18 20:40:43.908056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.002 [2024-11-18 20:40:43.923254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.002 [2024-11-18 20:40:43.923280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.002 [2024-11-18 20:40:43.940998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.002 [2024-11-18 20:40:43.941023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.002 [2024-11-18 20:40:43.950293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.002 [2024-11-18 20:40:43.950334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.002 [2024-11-18 20:40:43.961885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.002 [2024-11-18 20:40:43.961925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.002 [2024-11-18 20:40:43.972495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.002 [2024-11-18 20:40:43.972519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.002 [2024-11-18 20:40:43.982962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.002 
[2024-11-18 20:40:43.982988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.002 [2024-11-18 20:40:43.998575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.002 [2024-11-18 20:40:43.998600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.002 [2024-11-18 20:40:44.008166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.002 [2024-11-18 20:40:44.008191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.261 [2024-11-18 20:40:44.019879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.261 [2024-11-18 20:40:44.019906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.261 [2024-11-18 20:40:44.035233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.261 [2024-11-18 20:40:44.035258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.261 [2024-11-18 20:40:44.050743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.261 [2024-11-18 20:40:44.050785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.261 [2024-11-18 20:40:44.060047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.261 [2024-11-18 20:40:44.060071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.261 [2024-11-18 20:40:44.073620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.261 [2024-11-18 20:40:44.073656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.261 [2024-11-18 20:40:44.083383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.261 [2024-11-18 20:40:44.083431] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.261 [2024-11-18 20:40:44.097064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.261 [2024-11-18 20:40:44.097091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.261 [2024-11-18 20:40:44.107126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.261 [2024-11-18 20:40:44.107151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.261 [2024-11-18 20:40:44.118819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.261 [2024-11-18 20:40:44.118845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.261 [2024-11-18 20:40:44.134482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.261 [2024-11-18 20:40:44.134506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.261 [2024-11-18 20:40:44.143959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.261 [2024-11-18 20:40:44.144000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.261 [2024-11-18 20:40:44.158118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.261 [2024-11-18 20:40:44.158143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.261 [2024-11-18 20:40:44.167072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.261 [2024-11-18 20:40:44.167096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.261 [2024-11-18 20:40:44.182832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.261 [2024-11-18 20:40:44.182859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:32.261 [2024-11-18 20:40:44.192513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.261 [2024-11-18 20:40:44.192537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.261 [2024-11-18 20:40:44.204536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.261 [2024-11-18 20:40:44.204562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.261 11730.40 IOPS, 91.64 MiB/s [2024-11-18T19:40:44.269Z] [2024-11-18 20:40:44.215818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.261 [2024-11-18 20:40:44.215844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.261 [2024-11-18 20:40:44.224836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.261 [2024-11-18 20:40:44.224861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.261 00:39:32.261 Latency(us) 00:39:32.261 [2024-11-18T19:40:44.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:32.261 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:39:32.261 Nvme1n1 : 5.01 11731.79 91.65 0.00 0.00 10897.47 2803.48 18252.99 00:39:32.261 [2024-11-18T19:40:44.269Z] =================================================================================================================== 00:39:32.261 [2024-11-18T19:40:44.269Z] Total : 11731.79 91.65 0.00 0.00 10897.47 2803.48 18252.99 00:39:32.261 [2024-11-18 20:40:44.232833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.261 [2024-11-18 20:40:44.232858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.261 [2024-11-18 20:40:44.240847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:39:32.261 [2024-11-18 20:40:44.240872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.261 [2024-11-18 20:40:44.248904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.261 [2024-11-18 20:40:44.248954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.261 [2024-11-18 20:40:44.256891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.261 [2024-11-18 20:40:44.256947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.261 [2024-11-18 20:40:44.264890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.261 [2024-11-18 20:40:44.264935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.520 [2024-11-18 20:40:44.272899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.520 [2024-11-18 20:40:44.272952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.520 [2024-11-18 20:40:44.280891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.520 [2024-11-18 20:40:44.280948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.520 [2024-11-18 20:40:44.288890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.520 [2024-11-18 20:40:44.288935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.520 [2024-11-18 20:40:44.296903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.520 [2024-11-18 20:40:44.296960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.520 [2024-11-18 20:40:44.304896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.520 [2024-11-18 20:40:44.304953] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.520 [2024-11-18 20:40:44.312902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.520 [2024-11-18 20:40:44.312960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.520 [2024-11-18 20:40:44.320898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.520 [2024-11-18 20:40:44.320954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.520 [2024-11-18 20:40:44.328895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.520 [2024-11-18 20:40:44.328952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.520 [2024-11-18 20:40:44.336891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.520 [2024-11-18 20:40:44.336947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.520 [2024-11-18 20:40:44.344890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.520 [2024-11-18 20:40:44.344946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.520 [2024-11-18 20:40:44.352860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.520 [2024-11-18 20:40:44.352895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.520 [2024-11-18 20:40:44.360830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.520 [2024-11-18 20:40:44.360852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.520 [2024-11-18 20:40:44.368890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.520 [2024-11-18 20:40:44.368944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:32.520 [2024-11-18 20:40:44.376891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.520 [2024-11-18 20:40:44.376946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.520 [2024-11-18 20:40:44.384844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.520 [2024-11-18 20:40:44.384869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.520 [2024-11-18 20:40:44.392825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.520 [2024-11-18 20:40:44.392845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.520 [2024-11-18 20:40:44.400826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:32.520 [2024-11-18 20:40:44.400847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.520 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (442269) - No such process 00:39:32.520 20:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 442269 00:39:32.520 20:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:32.520 20:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:32.520 20:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:32.520 20:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:32.520 20:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:32.520 20:40:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:32.520 20:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:32.520 delay0 00:39:32.520 20:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:32.520 20:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:39:32.520 20:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:32.520 20:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:32.520 20:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:32.520 20:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:39:32.778 [2024-11-18 20:40:44.558789] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:39:40.905 Initializing NVMe Controllers 00:39:40.905 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:40.905 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:39:40.905 Initialization complete. Launching workers. 
00:39:40.905 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 235, failed: 20457 00:39:40.905 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 20557, failed to submit 135 00:39:40.905 success 20494, unsuccessful 63, failed 0 00:39:40.905 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:39:40.905 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:39:40.905 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:40.905 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:39:40.905 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:40.905 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:39:40.905 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:40.905 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:40.905 rmmod nvme_tcp 00:39:40.905 rmmod nvme_fabrics 00:39:40.905 rmmod nvme_keyring 00:39:40.905 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:40.905 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:39:40.906 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:39:40.906 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 440948 ']' 00:39:40.906 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 440948 00:39:40.906 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@954 -- # '[' -z 440948 ']' 00:39:40.906 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 440948 00:39:40.906 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:39:40.906 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:40.906 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 440948 00:39:40.906 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:40.906 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:40.906 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 440948' 00:39:40.906 killing process with pid 440948 00:39:40.906 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 440948 00:39:40.906 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 440948 00:39:40.906 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:40.906 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:40.906 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:40.906 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:39:40.906 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:39:40.906 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:40.906 
20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:39:40.906 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:40.906 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:40.906 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:40.906 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:40.906 20:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:42.282 20:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:42.282 00:39:42.282 real 0m28.706s 00:39:42.282 user 0m39.180s 00:39:42.282 sys 0m10.656s 00:39:42.282 20:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:42.282 20:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:42.282 ************************************ 00:39:42.282 END TEST nvmf_zcopy 00:39:42.282 ************************************ 00:39:42.282 20:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:42.282 20:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:42.282 20:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:42.282 20:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:42.282 
************************************ 00:39:42.282 START TEST nvmf_nmic 00:39:42.282 ************************************ 00:39:42.282 20:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:42.282 * Looking for test storage... 00:39:42.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:39:42.282 20:40:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:39:42.282 20:40:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:42.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:42.282 --rc genhtml_branch_coverage=1 00:39:42.282 --rc genhtml_function_coverage=1 00:39:42.282 --rc genhtml_legend=1 00:39:42.282 --rc geninfo_all_blocks=1 00:39:42.282 --rc geninfo_unexecuted_blocks=1 00:39:42.282 00:39:42.282 ' 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:42.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:42.282 --rc genhtml_branch_coverage=1 00:39:42.282 --rc genhtml_function_coverage=1 00:39:42.282 --rc genhtml_legend=1 00:39:42.282 --rc geninfo_all_blocks=1 00:39:42.282 --rc geninfo_unexecuted_blocks=1 00:39:42.282 00:39:42.282 ' 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:42.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:42.282 --rc genhtml_branch_coverage=1 00:39:42.282 --rc genhtml_function_coverage=1 00:39:42.282 --rc genhtml_legend=1 00:39:42.282 --rc geninfo_all_blocks=1 00:39:42.282 --rc geninfo_unexecuted_blocks=1 00:39:42.282 00:39:42.282 ' 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:42.282 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:42.282 --rc genhtml_branch_coverage=1 00:39:42.282 --rc genhtml_function_coverage=1 00:39:42.282 --rc genhtml_legend=1 00:39:42.282 --rc geninfo_all_blocks=1 00:39:42.282 --rc geninfo_unexecuted_blocks=1 00:39:42.282 00:39:42.282 ' 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:42.282 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:42.283 20:40:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:42.283 20:40:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:39:42.283 20:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:44.820 20:40:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:44.820 20:40:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:44.820 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:44.820 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:44.820 20:40:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:44.820 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:44.820 20:40:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:44.820 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:44.821 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:44.821 20:40:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:44.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:44.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:39:44.821 00:39:44.821 --- 10.0.0.2 ping statistics --- 00:39:44.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:44.821 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:44.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:44.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:39:44.821 00:39:44.821 --- 10.0.0.1 ping statistics --- 00:39:44.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:44.821 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=445643 
00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 445643 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 445643 ']' 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:44.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:44.821 [2024-11-18 20:40:56.537424] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:44.821 [2024-11-18 20:40:56.538513] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:39:44.821 [2024-11-18 20:40:56.538568] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:44.821 [2024-11-18 20:40:56.609845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:44.821 [2024-11-18 20:40:56.656817] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:44.821 [2024-11-18 20:40:56.656867] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:44.821 [2024-11-18 20:40:56.656881] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:44.821 [2024-11-18 20:40:56.656892] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:44.821 [2024-11-18 20:40:56.656902] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:44.821 [2024-11-18 20:40:56.658333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:44.821 [2024-11-18 20:40:56.658396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:44.821 [2024-11-18 20:40:56.658465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:44.821 [2024-11-18 20:40:56.658468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:44.821 [2024-11-18 20:40:56.742763] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:44.821 [2024-11-18 20:40:56.742962] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:44.821 [2024-11-18 20:40:56.743241] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:44.821 [2024-11-18 20:40:56.743835] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:44.821 [2024-11-18 20:40:56.744073] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:44.821 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:45.081 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:45.081 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:45.081 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.081 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:45.081 [2024-11-18 20:40:56.843106] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:45.081 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.081 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:45.081 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:39:45.081 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:45.081 Malloc0 00:39:45.081 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.081 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:45.081 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.081 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:45.081 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.081 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:45.081 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.081 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:45.081 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.081 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:45.081 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.081 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:45.081 [2024-11-18 20:40:56.911288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:45.081 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.081 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:39:45.081 test case1: single bdev can't be used in multiple subsystems 00:39:45.082 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:39:45.082 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.082 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:45.082 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.082 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:45.082 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.082 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:45.082 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.082 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:39:45.082 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:39:45.082 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.082 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:45.082 [2024-11-18 20:40:56.935059] bdev.c:8180:bdev_open: *ERROR*: bdev Malloc0 
already claimed: type exclusive_write by module NVMe-oF Target 00:39:45.082 [2024-11-18 20:40:56.935089] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:39:45.082 [2024-11-18 20:40:56.935104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.082 request: 00:39:45.082 { 00:39:45.082 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:39:45.082 "namespace": { 00:39:45.082 "bdev_name": "Malloc0", 00:39:45.082 "no_auto_visible": false 00:39:45.082 }, 00:39:45.082 "method": "nvmf_subsystem_add_ns", 00:39:45.082 "req_id": 1 00:39:45.082 } 00:39:45.082 Got JSON-RPC error response 00:39:45.082 response: 00:39:45.082 { 00:39:45.082 "code": -32602, 00:39:45.082 "message": "Invalid parameters" 00:39:45.082 } 00:39:45.082 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:45.082 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:39:45.082 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:39:45.082 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:39:45.082 Adding namespace failed - expected result. 
00:39:45.082 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:39:45.082 test case2: host connect to nvmf target in multiple paths 00:39:45.082 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:45.082 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.082 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:45.082 [2024-11-18 20:40:56.943154] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:45.082 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.082 20:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:45.342 20:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:39:45.602 20:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:39:45.602 20:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:39:45.602 20:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:45.602 20:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:39:45.602 20:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:39:47.554 20:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:47.554 20:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:47.554 20:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:47.554 20:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:39:47.554 20:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:47.554 20:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:39:47.554 20:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:47.554 [global] 00:39:47.554 thread=1 00:39:47.554 invalidate=1 00:39:47.554 rw=write 00:39:47.554 time_based=1 00:39:47.554 runtime=1 00:39:47.554 ioengine=libaio 00:39:47.554 direct=1 00:39:47.554 bs=4096 00:39:47.554 iodepth=1 00:39:47.554 norandommap=0 00:39:47.554 numjobs=1 00:39:47.554 00:39:47.554 verify_dump=1 00:39:47.554 verify_backlog=512 00:39:47.554 verify_state_save=0 00:39:47.554 do_verify=1 00:39:47.554 verify=crc32c-intel 00:39:47.554 [job0] 00:39:47.554 filename=/dev/nvme0n1 00:39:47.554 Could not set queue depth (nvme0n1) 00:39:47.813 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:47.813 fio-3.35 00:39:47.813 Starting 1 thread 00:39:49.188 00:39:49.188 job0: (groupid=0, jobs=1): err= 0: pid=446144: Mon Nov 18 
20:41:00 2024 00:39:49.188 read: IOPS=357, BW=1429KiB/s (1464kB/s)(1468KiB/1027msec) 00:39:49.188 slat (nsec): min=5460, max=39311, avg=8335.71, stdev=4492.04 00:39:49.188 clat (usec): min=198, max=42013, avg=2378.92, stdev=9131.84 00:39:49.188 lat (usec): min=206, max=42033, avg=2387.26, stdev=9134.22 00:39:49.188 clat percentiles (usec): 00:39:49.188 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 219], 00:39:49.188 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 245], 60.00th=[ 269], 00:39:49.188 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[40633], 00:39:49.188 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:49.188 | 99.99th=[42206] 00:39:49.188 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:39:49.188 slat (usec): min=8, max=28955, avg=75.68, stdev=1278.83 00:39:49.188 clat (usec): min=150, max=436, avg=210.47, stdev=33.59 00:39:49.188 lat (usec): min=160, max=29252, avg=286.15, stdev=1283.12 00:39:49.188 clat percentiles (usec): 00:39:49.188 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 174], 20.00th=[ 182], 00:39:49.188 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 212], 60.00th=[ 225], 00:39:49.188 | 70.00th=[ 231], 80.00th=[ 243], 90.00th=[ 255], 95.00th=[ 258], 00:39:49.188 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 437], 99.95th=[ 437], 00:39:49.188 | 99.99th=[ 437] 00:39:49.188 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:39:49.188 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:49.188 lat (usec) : 250=72.24%, 500=25.48%, 750=0.11% 00:39:49.188 lat (msec) : 50=2.16% 00:39:49.188 cpu : usr=0.58%, sys=1.95%, ctx=881, majf=0, minf=1 00:39:49.188 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:49.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:49.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:49.188 issued rwts: 
total=367,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:49.188 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:49.188 00:39:49.188 Run status group 0 (all jobs): 00:39:49.188 READ: bw=1429KiB/s (1464kB/s), 1429KiB/s-1429KiB/s (1464kB/s-1464kB/s), io=1468KiB (1503kB), run=1027-1027msec 00:39:49.188 WRITE: bw=1994KiB/s (2042kB/s), 1994KiB/s-1994KiB/s (2042kB/s-2042kB/s), io=2048KiB (2097kB), run=1027-1027msec 00:39:49.188 00:39:49.188 Disk stats (read/write): 00:39:49.188 nvme0n1: ios=415/512, merge=0/0, ticks=1068/97, in_queue=1165, util=98.70% 00:39:49.188 20:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:49.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:39:49.188 20:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:49.188 20:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:39:49.188 20:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:49.188 20:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:49.188 20:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:49.188 20:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:49.188 20:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:39:49.189 20:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:39:49.189 20:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:39:49.189 20:41:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:49.189 20:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:39:49.189 20:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:49.189 20:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:39:49.189 20:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:49.189 20:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:49.189 rmmod nvme_tcp 00:39:49.189 rmmod nvme_fabrics 00:39:49.189 rmmod nvme_keyring 00:39:49.189 20:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:49.189 20:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:39:49.189 20:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:39:49.189 20:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 445643 ']' 00:39:49.189 20:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 445643 00:39:49.189 20:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 445643 ']' 00:39:49.189 20:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 445643 00:39:49.189 20:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:39:49.189 20:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:49.189 20:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 445643 00:39:49.189 
20:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:49.189 20:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:49.189 20:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 445643' 00:39:49.189 killing process with pid 445643 00:39:49.189 20:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 445643 00:39:49.189 20:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 445643 00:39:49.447 20:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:49.447 20:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:49.447 20:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:49.447 20:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:39:49.447 20:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:39:49.447 20:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:49.447 20:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:39:49.447 20:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:49.447 20:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:49.447 20:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:49.447 20:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:49.447 20:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:51.368 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:51.368 00:39:51.368 real 0m9.320s 00:39:51.368 user 0m17.467s 00:39:51.368 sys 0m3.264s 00:39:51.368 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:51.368 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:51.368 ************************************ 00:39:51.368 END TEST nvmf_nmic 00:39:51.368 ************************************ 00:39:51.368 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:51.368 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:51.368 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:51.368 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:51.368 ************************************ 00:39:51.368 START TEST nvmf_fio_target 00:39:51.368 ************************************ 00:39:51.368 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:51.628 * Looking for test storage... 
00:39:51.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:51.628 
20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:51.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.628 --rc genhtml_branch_coverage=1 00:39:51.628 --rc genhtml_function_coverage=1 00:39:51.628 --rc genhtml_legend=1 00:39:51.628 --rc geninfo_all_blocks=1 00:39:51.628 --rc geninfo_unexecuted_blocks=1 00:39:51.628 00:39:51.628 ' 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:51.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.628 --rc genhtml_branch_coverage=1 00:39:51.628 --rc genhtml_function_coverage=1 00:39:51.628 --rc genhtml_legend=1 00:39:51.628 --rc geninfo_all_blocks=1 00:39:51.628 --rc geninfo_unexecuted_blocks=1 00:39:51.628 00:39:51.628 ' 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:51.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.628 --rc genhtml_branch_coverage=1 00:39:51.628 --rc genhtml_function_coverage=1 00:39:51.628 --rc genhtml_legend=1 00:39:51.628 --rc geninfo_all_blocks=1 00:39:51.628 --rc geninfo_unexecuted_blocks=1 00:39:51.628 00:39:51.628 ' 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:51.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.628 --rc genhtml_branch_coverage=1 00:39:51.628 --rc genhtml_function_coverage=1 00:39:51.628 --rc genhtml_legend=1 00:39:51.628 --rc geninfo_all_blocks=1 
00:39:51.628 --rc geninfo_unexecuted_blocks=1 00:39:51.628 00:39:51.628 ' 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:51.628 
20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:51.628 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:51.628 20:41:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:51.629 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:51.629 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:39:51.629 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:51.629 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:39:51.629 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:51.629 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:51.629 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:51.629 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:51.629 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:51.629 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:51.629 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:51.629 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:51.629 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:51.629 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:51.629 
20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:51.629 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:51.629 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:51.629 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:39:51.629 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:51.629 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:51.629 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:51.629 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:51.629 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:51.629 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:51.629 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:51.629 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:51.629 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:51.629 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:51.629 20:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:39:51.629 20:41:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:39:53.538 20:41:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:53.538 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:53.538 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:53.538 
20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:53.538 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:53.538 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:53.538 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:53.539 20:41:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:53.539 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:53.539 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:53.539 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:53.539 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:53.539 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:53.539 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:53.539 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:53.539 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:53.539 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:53.539 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:53.539 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:53.539 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:53.539 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:53.539 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:39:53.539 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:53.798 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:53.798 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:53.798 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:53.798 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:53.798 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:53.798 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:53.798 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:53.798 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:53.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:53.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:39:53.798 00:39:53.798 --- 10.0.0.2 ping statistics --- 00:39:53.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:53.798 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:39:53.798 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:53.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:53.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:39:53.798 00:39:53.798 --- 10.0.0.1 ping statistics --- 00:39:53.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:53.798 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:39:53.798 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:53.798 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:39:53.798 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:53.798 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:53.798 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:53.798 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:53.798 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:53.798 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:53.798 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:53.798 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:39:53.798 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:53.798 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:53.798 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:53.798 20:41:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=448219 00:39:53.798 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:53.798 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 448219 00:39:53.798 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 448219 ']' 00:39:53.798 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:53.798 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:53.798 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:53.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:53.798 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:53.798 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:53.798 [2024-11-18 20:41:05.722329] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:53.798 [2024-11-18 20:41:05.723384] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:39:53.798 [2024-11-18 20:41:05.723457] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:53.799 [2024-11-18 20:41:05.796443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:54.057 [2024-11-18 20:41:05.843173] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:54.057 [2024-11-18 20:41:05.843243] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:54.057 [2024-11-18 20:41:05.843270] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:54.057 [2024-11-18 20:41:05.843282] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:54.057 [2024-11-18 20:41:05.843291] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:54.057 [2024-11-18 20:41:05.844946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:54.057 [2024-11-18 20:41:05.845076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:54.057 [2024-11-18 20:41:05.845131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:54.057 [2024-11-18 20:41:05.845134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:54.057 [2024-11-18 20:41:05.931534] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:54.057 [2024-11-18 20:41:05.931759] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:54.057 [2024-11-18 20:41:05.932047] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:54.057 [2024-11-18 20:41:05.932657] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:54.057 [2024-11-18 20:41:05.932886] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:54.057 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:54.057 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:39:54.057 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:54.057 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:54.057 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:54.057 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:54.057 20:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:54.316 [2024-11-18 20:41:06.241800] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:54.316 20:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:54.883 20:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:39:54.883 20:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:39:55.141 20:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:39:55.141 20:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:55.399 20:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:39:55.399 20:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:55.657 20:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:39:55.657 20:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:39:55.915 20:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:56.173 20:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:39:56.173 20:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:56.433 20:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:39:56.433 20:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:57.003 20:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:39:57.003 20:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:39:57.003 20:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:57.263 20:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:57.263 20:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:57.828 20:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:57.828 20:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:57.828 20:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:58.087 [2024-11-18 20:41:10.050049] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:58.087 20:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:39:58.345 20:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:39:58.913 20:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:58.914 20:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:39:58.914 20:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:39:58.914 20:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:58.914 20:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:39:58.914 20:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:39:58.914 20:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:40:00.818 20:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:00.818 20:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:00.818 20:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:00.818 20:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:40:00.818 20:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:00.818 20:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:40:00.818 20:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:01.076 [global] 00:40:01.076 thread=1 00:40:01.076 invalidate=1 00:40:01.076 rw=write 00:40:01.076 time_based=1 00:40:01.076 runtime=1 00:40:01.076 ioengine=libaio 00:40:01.076 direct=1 00:40:01.076 bs=4096 00:40:01.076 iodepth=1 00:40:01.076 norandommap=0 00:40:01.076 numjobs=1 00:40:01.076 00:40:01.076 verify_dump=1 00:40:01.076 verify_backlog=512 00:40:01.076 verify_state_save=0 00:40:01.076 do_verify=1 00:40:01.076 verify=crc32c-intel 00:40:01.076 [job0] 00:40:01.076 filename=/dev/nvme0n1 00:40:01.076 [job1] 00:40:01.076 filename=/dev/nvme0n2 00:40:01.076 [job2] 00:40:01.076 filename=/dev/nvme0n3 00:40:01.076 [job3] 00:40:01.076 filename=/dev/nvme0n4 00:40:01.076 Could not set queue depth (nvme0n1) 00:40:01.076 Could not set queue depth (nvme0n2) 00:40:01.076 Could not set queue depth (nvme0n3) 00:40:01.076 Could not set queue depth (nvme0n4) 00:40:01.076 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:01.076 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:01.076 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:01.076 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:01.076 fio-3.35 00:40:01.076 Starting 4 threads 00:40:02.451 00:40:02.451 job0: (groupid=0, jobs=1): err= 0: pid=449277: Mon Nov 18 20:41:14 2024 00:40:02.451 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:40:02.451 slat (nsec): min=5532, max=42780, avg=9458.56, stdev=7303.67 00:40:02.451 clat (usec): min=203, max=41237, avg=251.28, stdev=906.46 00:40:02.451 lat (usec): min=209, 
max=41254, avg=260.74, stdev=906.67 00:40:02.451 clat percentiles (usec): 00:40:02.451 | 1.00th=[ 208], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 219], 00:40:02.451 | 30.00th=[ 223], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 233], 00:40:02.451 | 70.00th=[ 237], 80.00th=[ 241], 90.00th=[ 251], 95.00th=[ 260], 00:40:02.451 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 297], 99.95th=[ 1139], 00:40:02.451 | 99.99th=[41157] 00:40:02.451 write: IOPS=2335, BW=9343KiB/s (9567kB/s)(9352KiB/1001msec); 0 zone resets 00:40:02.451 slat (nsec): min=6909, max=39888, avg=11434.65, stdev=7544.12 00:40:02.451 clat (usec): min=141, max=1314, avg=182.53, stdev=46.36 00:40:02.451 lat (usec): min=149, max=1326, avg=193.96, stdev=47.82 00:40:02.451 clat percentiles (usec): 00:40:02.451 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 157], 00:40:02.451 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 176], 00:40:02.451 | 70.00th=[ 190], 80.00th=[ 198], 90.00th=[ 241], 95.00th=[ 265], 00:40:02.451 | 99.00th=[ 281], 99.50th=[ 310], 99.90th=[ 775], 99.95th=[ 889], 00:40:02.451 | 99.99th=[ 1319] 00:40:02.451 bw ( KiB/s): min= 8192, max= 8192, per=37.04%, avg=8192.00, stdev= 0.00, samples=1 00:40:02.451 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:40:02.451 lat (usec) : 250=90.24%, 500=9.62%, 750=0.02%, 1000=0.05% 00:40:02.451 lat (msec) : 2=0.05%, 50=0.02% 00:40:02.451 cpu : usr=3.60%, sys=5.60%, ctx=4386, majf=0, minf=1 00:40:02.451 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:02.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:02.451 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:02.451 issued rwts: total=2048,2338,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:02.451 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:02.451 job1: (groupid=0, jobs=1): err= 0: pid=449278: Mon Nov 18 20:41:14 2024 00:40:02.451 read: IOPS=22, BW=89.5KiB/s 
(91.6kB/s)(92.0KiB/1028msec) 00:40:02.451 slat (nsec): min=7101, max=32980, avg=13995.61, stdev=4502.41 00:40:02.451 clat (usec): min=282, max=41038, avg=39185.14, stdev=8480.88 00:40:02.451 lat (usec): min=295, max=41050, avg=39199.14, stdev=8481.06 00:40:02.451 clat percentiles (usec): 00:40:02.451 | 1.00th=[ 285], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:40:02.451 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:02.451 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:02.451 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:02.451 | 99.99th=[41157] 00:40:02.451 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:40:02.451 slat (nsec): min=6911, max=43125, avg=13696.20, stdev=7744.90 00:40:02.451 clat (usec): min=165, max=1535, avg=229.19, stdev=62.73 00:40:02.451 lat (usec): min=180, max=1543, avg=242.88, stdev=63.95 00:40:02.451 clat percentiles (usec): 00:40:02.451 | 1.00th=[ 178], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 206], 00:40:02.451 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 233], 00:40:02.451 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 260], 95.00th=[ 273], 00:40:02.451 | 99.00th=[ 297], 99.50th=[ 302], 99.90th=[ 1532], 99.95th=[ 1532], 00:40:02.451 | 99.99th=[ 1532] 00:40:02.451 bw ( KiB/s): min= 4096, max= 4096, per=18.52%, avg=4096.00, stdev= 0.00, samples=1 00:40:02.451 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:02.451 lat (usec) : 250=79.81%, 500=15.89% 00:40:02.451 lat (msec) : 2=0.19%, 50=4.11% 00:40:02.451 cpu : usr=0.88%, sys=0.49%, ctx=535, majf=0, minf=1 00:40:02.451 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:02.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:02.451 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:02.451 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:40:02.451 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:02.451 job2: (groupid=0, jobs=1): err= 0: pid=449279: Mon Nov 18 20:41:14 2024 00:40:02.451 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:40:02.451 slat (nsec): min=5652, max=35641, avg=8439.21, stdev=4461.51 00:40:02.451 clat (usec): min=204, max=795, avg=254.35, stdev=45.34 00:40:02.451 lat (usec): min=211, max=816, avg=262.79, stdev=46.19 00:40:02.451 clat percentiles (usec): 00:40:02.451 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 221], 20.00th=[ 229], 00:40:02.451 | 30.00th=[ 237], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:40:02.451 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 289], 00:40:02.451 | 99.00th=[ 519], 99.50th=[ 570], 99.90th=[ 594], 99.95th=[ 594], 00:40:02.451 | 99.99th=[ 799] 00:40:02.451 write: IOPS=2319, BW=9279KiB/s (9501kB/s)(9288KiB/1001msec); 0 zone resets 00:40:02.451 slat (nsec): min=7447, max=55226, avg=11270.20, stdev=5999.30 00:40:02.451 clat (usec): min=141, max=2546, avg=182.38, stdev=57.13 00:40:02.451 lat (usec): min=149, max=2556, avg=193.65, stdev=58.99 00:40:02.451 clat percentiles (usec): 00:40:02.451 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:40:02.451 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 178], 00:40:02.451 | 70.00th=[ 188], 80.00th=[ 202], 90.00th=[ 227], 95.00th=[ 243], 00:40:02.451 | 99.00th=[ 273], 99.50th=[ 285], 99.90th=[ 367], 99.95th=[ 404], 00:40:02.451 | 99.99th=[ 2540] 00:40:02.451 bw ( KiB/s): min= 8192, max= 8192, per=37.04%, avg=8192.00, stdev= 0.00, samples=1 00:40:02.451 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:40:02.451 lat (usec) : 250=76.27%, 500=23.11%, 750=0.57%, 1000=0.02% 00:40:02.451 lat (msec) : 4=0.02% 00:40:02.451 cpu : usr=3.30%, sys=5.80%, ctx=4372, majf=0, minf=1 00:40:02.451 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:02.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:40:02.451 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:02.451 issued rwts: total=2048,2322,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:02.451 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:02.451 job3: (groupid=0, jobs=1): err= 0: pid=449280: Mon Nov 18 20:41:14 2024 00:40:02.451 read: IOPS=23, BW=94.8KiB/s (97.0kB/s)(96.0KiB/1013msec) 00:40:02.451 slat (nsec): min=6046, max=30255, avg=14645.54, stdev=3785.51 00:40:02.451 clat (usec): min=295, max=41079, avg=37563.34, stdev=11459.79 00:40:02.451 lat (usec): min=309, max=41094, avg=37577.98, stdev=11457.40 00:40:02.451 clat percentiles (usec): 00:40:02.451 | 1.00th=[ 297], 5.00th=[ 420], 10.00th=[40633], 20.00th=[40633], 00:40:02.451 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:02.451 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:02.451 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:02.451 | 99.99th=[41157] 00:40:02.451 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:40:02.451 slat (usec): min=6, max=948, avg=15.49, stdev=42.02 00:40:02.451 clat (usec): min=159, max=271, avg=198.12, stdev=20.90 00:40:02.451 lat (usec): min=168, max=1120, avg=213.61, stdev=45.75 00:40:02.451 clat percentiles (usec): 00:40:02.451 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 182], 00:40:02.451 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:40:02.451 | 70.00th=[ 204], 80.00th=[ 215], 90.00th=[ 227], 95.00th=[ 237], 00:40:02.451 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 273], 99.95th=[ 273], 00:40:02.451 | 99.99th=[ 273] 00:40:02.451 bw ( KiB/s): min= 4096, max= 4096, per=18.52%, avg=4096.00, stdev= 0.00, samples=1 00:40:02.451 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:02.452 lat (usec) : 250=92.16%, 500=3.73% 00:40:02.452 lat (msec) : 50=4.10% 00:40:02.452 cpu : usr=0.40%, 
sys=0.59%, ctx=538, majf=0, minf=1 00:40:02.452 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:02.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:02.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:02.452 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:02.452 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:02.452 00:40:02.452 Run status group 0 (all jobs): 00:40:02.452 READ: bw=15.7MiB/s (16.5MB/s), 89.5KiB/s-8184KiB/s (91.6kB/s-8380kB/s), io=16.2MiB (17.0MB), run=1001-1028msec 00:40:02.452 WRITE: bw=21.6MiB/s (22.6MB/s), 1992KiB/s-9343KiB/s (2040kB/s-9567kB/s), io=22.2MiB (23.3MB), run=1001-1028msec 00:40:02.452 00:40:02.452 Disk stats (read/write): 00:40:02.452 nvme0n1: ios=1669/2048, merge=0/0, ticks=420/370, in_queue=790, util=86.77% 00:40:02.452 nvme0n2: ios=24/512, merge=0/0, ticks=699/116, in_queue=815, util=86.67% 00:40:02.452 nvme0n3: ios=1688/2048, merge=0/0, ticks=1402/355, in_queue=1757, util=98.22% 00:40:02.452 nvme0n4: ios=76/512, merge=0/0, ticks=892/101, in_queue=993, util=98.10% 00:40:02.452 20:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:40:02.452 [global] 00:40:02.452 thread=1 00:40:02.452 invalidate=1 00:40:02.452 rw=randwrite 00:40:02.452 time_based=1 00:40:02.452 runtime=1 00:40:02.452 ioengine=libaio 00:40:02.452 direct=1 00:40:02.452 bs=4096 00:40:02.452 iodepth=1 00:40:02.452 norandommap=0 00:40:02.452 numjobs=1 00:40:02.452 00:40:02.452 verify_dump=1 00:40:02.452 verify_backlog=512 00:40:02.452 verify_state_save=0 00:40:02.452 do_verify=1 00:40:02.452 verify=crc32c-intel 00:40:02.452 [job0] 00:40:02.452 filename=/dev/nvme0n1 00:40:02.452 [job1] 00:40:02.452 filename=/dev/nvme0n2 00:40:02.452 [job2] 00:40:02.452 filename=/dev/nvme0n3 00:40:02.452 
[job3] 00:40:02.452 filename=/dev/nvme0n4 00:40:02.452 Could not set queue depth (nvme0n1) 00:40:02.452 Could not set queue depth (nvme0n2) 00:40:02.452 Could not set queue depth (nvme0n3) 00:40:02.452 Could not set queue depth (nvme0n4) 00:40:02.710 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:02.710 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:02.710 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:02.710 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:02.710 fio-3.35 00:40:02.710 Starting 4 threads 00:40:04.089 00:40:04.089 job0: (groupid=0, jobs=1): err= 0: pid=449511: Mon Nov 18 20:41:15 2024 00:40:04.089 read: IOPS=20, BW=83.6KiB/s (85.6kB/s)(84.0KiB/1005msec) 00:40:04.089 slat (nsec): min=6047, max=33114, avg=18179.57, stdev=5632.25 00:40:04.089 clat (usec): min=25780, max=42047, avg=40899.01, stdev=3499.06 00:40:04.089 lat (usec): min=25799, max=42062, avg=40917.19, stdev=3499.41 00:40:04.089 clat percentiles (usec): 00:40:04.089 | 1.00th=[25822], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:40:04.089 | 30.00th=[41157], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:40:04.089 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:40:04.089 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:04.089 | 99.99th=[42206] 00:40:04.089 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:40:04.089 slat (nsec): min=5538, max=43664, avg=9293.86, stdev=4801.63 00:40:04.089 clat (usec): min=172, max=493, avg=272.41, stdev=69.87 00:40:04.089 lat (usec): min=178, max=508, avg=281.70, stdev=71.51 00:40:04.089 clat percentiles (usec): 00:40:04.089 | 1.00th=[ 192], 5.00th=[ 202], 10.00th=[ 215], 20.00th=[ 227], 00:40:04.089 | 
30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 251], 00:40:04.089 | 70.00th=[ 269], 80.00th=[ 343], 90.00th=[ 388], 95.00th=[ 420], 00:40:04.089 | 99.00th=[ 461], 99.50th=[ 469], 99.90th=[ 494], 99.95th=[ 494], 00:40:04.089 | 99.99th=[ 494] 00:40:04.089 bw ( KiB/s): min= 4096, max= 4096, per=27.98%, avg=4096.00, stdev= 0.00, samples=1 00:40:04.089 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:04.089 lat (usec) : 250=57.79%, 500=38.27% 00:40:04.089 lat (msec) : 50=3.94% 00:40:04.089 cpu : usr=0.20%, sys=0.40%, ctx=534, majf=0, minf=1 00:40:04.089 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:04.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:04.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:04.089 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:04.089 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:04.089 job1: (groupid=0, jobs=1): err= 0: pid=449512: Mon Nov 18 20:41:15 2024 00:40:04.089 read: IOPS=27, BW=111KiB/s (114kB/s)(112KiB/1009msec) 00:40:04.089 slat (nsec): min=9145, max=59230, avg=21874.07, stdev=11181.85 00:40:04.089 clat (usec): min=298, max=41523, avg=30825.11, stdev=17884.96 00:40:04.089 lat (usec): min=313, max=41540, avg=30846.99, stdev=17880.45 00:40:04.089 clat percentiles (usec): 00:40:04.089 | 1.00th=[ 297], 5.00th=[ 347], 10.00th=[ 400], 20.00th=[ 449], 00:40:04.089 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:04.089 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:04.089 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:40:04.089 | 99.99th=[41681] 00:40:04.089 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:40:04.089 slat (nsec): min=6510, max=38996, avg=10119.54, stdev=4950.06 00:40:04.089 clat (usec): min=189, max=474, avg=270.38, stdev=63.32 00:40:04.089 lat 
(usec): min=197, max=494, avg=280.50, stdev=64.98 00:40:04.089 clat percentiles (usec): 00:40:04.089 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 215], 20.00th=[ 227], 00:40:04.089 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 251], 00:40:04.089 | 70.00th=[ 277], 80.00th=[ 334], 90.00th=[ 383], 95.00th=[ 400], 00:40:04.089 | 99.00th=[ 433], 99.50th=[ 457], 99.90th=[ 474], 99.95th=[ 474], 00:40:04.089 | 99.99th=[ 474] 00:40:04.089 bw ( KiB/s): min= 4096, max= 4096, per=27.98%, avg=4096.00, stdev= 0.00, samples=1 00:40:04.089 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:04.089 lat (usec) : 250=55.56%, 500=40.37%, 750=0.19% 00:40:04.089 lat (msec) : 50=3.89% 00:40:04.089 cpu : usr=0.40%, sys=0.69%, ctx=540, majf=0, minf=1 00:40:04.089 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:04.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:04.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:04.089 issued rwts: total=28,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:04.089 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:04.089 job2: (groupid=0, jobs=1): err= 0: pid=449513: Mon Nov 18 20:41:15 2024 00:40:04.089 read: IOPS=1205, BW=4822KiB/s (4938kB/s)(4856KiB/1007msec) 00:40:04.089 slat (nsec): min=4827, max=59249, avg=11408.45, stdev=6633.92 00:40:04.089 clat (usec): min=185, max=41106, avg=536.10, stdev=3382.59 00:40:04.089 lat (usec): min=192, max=41152, avg=547.51, stdev=3383.69 00:40:04.089 clat percentiles (usec): 00:40:04.089 | 1.00th=[ 210], 5.00th=[ 217], 10.00th=[ 219], 20.00th=[ 225], 00:40:04.090 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 243], 00:40:04.090 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 277], 95.00th=[ 322], 00:40:04.090 | 99.00th=[ 457], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:40:04.090 | 99.99th=[41157] 00:40:04.090 write: IOPS=1525, BW=6101KiB/s 
(6248kB/s)(6144KiB/1007msec); 0 zone resets 00:40:04.090 slat (nsec): min=6348, max=44165, avg=12561.68, stdev=5760.62 00:40:04.090 clat (usec): min=154, max=2865, avg=203.61, stdev=89.50 00:40:04.090 lat (usec): min=162, max=2881, avg=216.17, stdev=89.48 00:40:04.090 clat percentiles (usec): 00:40:04.090 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 174], 00:40:04.090 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 188], 00:40:04.090 | 70.00th=[ 194], 80.00th=[ 206], 90.00th=[ 253], 95.00th=[ 375], 00:40:04.090 | 99.00th=[ 420], 99.50th=[ 453], 99.90th=[ 652], 99.95th=[ 2868], 00:40:04.090 | 99.99th=[ 2868] 00:40:04.090 bw ( KiB/s): min= 4552, max= 7736, per=41.97%, avg=6144.00, stdev=2251.43, samples=2 00:40:04.090 iops : min= 1138, max= 1934, avg=1536.00, stdev=562.86, samples=2 00:40:04.090 lat (usec) : 250=83.05%, 500=16.47%, 750=0.11% 00:40:04.090 lat (msec) : 4=0.04%, 50=0.33% 00:40:04.090 cpu : usr=2.58%, sys=2.68%, ctx=2750, majf=0, minf=2 00:40:04.090 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:04.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:04.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:04.090 issued rwts: total=1214,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:04.090 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:04.090 job3: (groupid=0, jobs=1): err= 0: pid=449514: Mon Nov 18 20:41:15 2024 00:40:04.090 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:40:04.090 slat (nsec): min=4489, max=36372, avg=9652.41, stdev=4532.75 00:40:04.090 clat (usec): min=198, max=41091, avg=710.50, stdev=4377.48 00:40:04.090 lat (usec): min=204, max=41097, avg=720.15, stdev=4378.30 00:40:04.090 clat percentiles (usec): 00:40:04.090 | 1.00th=[ 202], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 215], 00:40:04.090 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 231], 00:40:04.090 | 70.00th=[ 239], 
80.00th=[ 251], 90.00th=[ 273], 95.00th=[ 297], 00:40:04.090 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:04.090 | 99.99th=[41157] 00:40:04.090 write: IOPS=1131, BW=4527KiB/s (4636kB/s)(4532KiB/1001msec); 0 zone resets 00:40:04.090 slat (nsec): min=5921, max=53154, avg=11158.92, stdev=6953.88 00:40:04.090 clat (usec): min=149, max=4020, avg=214.70, stdev=158.36 00:40:04.090 lat (usec): min=157, max=4028, avg=225.85, stdev=158.55 00:40:04.090 clat percentiles (usec): 00:40:04.090 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 172], 20.00th=[ 182], 00:40:04.090 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 204], 00:40:04.090 | 70.00th=[ 217], 80.00th=[ 243], 90.00th=[ 249], 95.00th=[ 273], 00:40:04.090 | 99.00th=[ 338], 99.50th=[ 379], 99.90th=[ 3720], 99.95th=[ 4015], 00:40:04.090 | 99.99th=[ 4015] 00:40:04.090 bw ( KiB/s): min= 4648, max= 4648, per=31.75%, avg=4648.00, stdev= 0.00, samples=1 00:40:04.090 iops : min= 1162, max= 1162, avg=1162.00, stdev= 0.00, samples=1 00:40:04.090 lat (usec) : 250=84.84%, 500=14.51% 00:40:04.090 lat (msec) : 4=0.05%, 10=0.05%, 50=0.56% 00:40:04.090 cpu : usr=1.10%, sys=2.50%, ctx=2158, majf=0, minf=1 00:40:04.090 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:04.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:04.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:04.090 issued rwts: total=1024,1133,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:04.090 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:04.090 00:40:04.090 Run status group 0 (all jobs): 00:40:04.090 READ: bw=9066KiB/s (9284kB/s), 83.6KiB/s-4822KiB/s (85.6kB/s-4938kB/s), io=9148KiB (9368kB), run=1001-1009msec 00:40:04.090 WRITE: bw=14.3MiB/s (15.0MB/s), 2030KiB/s-6101KiB/s (2078kB/s-6248kB/s), io=14.4MiB (15.1MB), run=1001-1009msec 00:40:04.090 00:40:04.090 Disk stats (read/write): 00:40:04.090 nvme0n1: ios=66/512, merge=0/0, 
ticks=699/132, in_queue=831, util=87.27% 00:40:04.090 nvme0n2: ios=40/512, merge=0/0, ticks=706/134, in_queue=840, util=86.90% 00:40:04.090 nvme0n3: ios=1077/1256, merge=0/0, ticks=623/252, in_queue=875, util=91.75% 00:40:04.090 nvme0n4: ios=1034/1024, merge=0/0, ticks=908/213, in_queue=1121, util=95.37% 00:40:04.090 20:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:40:04.090 [global] 00:40:04.090 thread=1 00:40:04.090 invalidate=1 00:40:04.090 rw=write 00:40:04.090 time_based=1 00:40:04.090 runtime=1 00:40:04.090 ioengine=libaio 00:40:04.090 direct=1 00:40:04.090 bs=4096 00:40:04.090 iodepth=128 00:40:04.090 norandommap=0 00:40:04.090 numjobs=1 00:40:04.090 00:40:04.090 verify_dump=1 00:40:04.090 verify_backlog=512 00:40:04.090 verify_state_save=0 00:40:04.090 do_verify=1 00:40:04.090 verify=crc32c-intel 00:40:04.090 [job0] 00:40:04.090 filename=/dev/nvme0n1 00:40:04.090 [job1] 00:40:04.090 filename=/dev/nvme0n2 00:40:04.090 [job2] 00:40:04.090 filename=/dev/nvme0n3 00:40:04.090 [job3] 00:40:04.090 filename=/dev/nvme0n4 00:40:04.090 Could not set queue depth (nvme0n1) 00:40:04.090 Could not set queue depth (nvme0n2) 00:40:04.090 Could not set queue depth (nvme0n3) 00:40:04.090 Could not set queue depth (nvme0n4) 00:40:04.090 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:04.090 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:04.090 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:04.090 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:04.090 fio-3.35 00:40:04.090 Starting 4 threads 00:40:05.466 00:40:05.466 job0: (groupid=0, jobs=1): err= 0: pid=449738: Mon Nov 
18 20:41:17 2024 00:40:05.466 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:40:05.466 slat (usec): min=2, max=13049, avg=94.94, stdev=606.98 00:40:05.466 clat (usec): min=6883, max=28152, avg=12264.35, stdev=2843.17 00:40:05.466 lat (usec): min=6890, max=28165, avg=12359.29, stdev=2866.39 00:40:05.466 clat percentiles (usec): 00:40:05.466 | 1.00th=[ 7767], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10159], 00:40:05.466 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11731], 60.00th=[12256], 00:40:05.466 | 70.00th=[13042], 80.00th=[13960], 90.00th=[15270], 95.00th=[16450], 00:40:05.466 | 99.00th=[23987], 99.50th=[26870], 99.90th=[26870], 99.95th=[26870], 00:40:05.466 | 99.99th=[28181] 00:40:05.466 write: IOPS=5451, BW=21.3MiB/s (22.3MB/s)(21.3MiB/1002msec); 0 zone resets 00:40:05.466 slat (usec): min=3, max=10049, avg=88.07, stdev=559.00 00:40:05.466 clat (usec): min=1414, max=23335, avg=11730.34, stdev=1921.79 00:40:05.466 lat (usec): min=1420, max=23347, avg=11818.41, stdev=1969.73 00:40:05.466 clat percentiles (usec): 00:40:05.466 | 1.00th=[ 6652], 5.00th=[ 8979], 10.00th=[10159], 20.00th=[10552], 00:40:05.467 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11731], 60.00th=[11863], 00:40:05.467 | 70.00th=[12125], 80.00th=[12518], 90.00th=[13566], 95.00th=[15270], 00:40:05.467 | 99.00th=[17957], 99.50th=[20317], 99.90th=[20579], 99.95th=[21103], 00:40:05.467 | 99.99th=[23462] 00:40:05.467 bw ( KiB/s): min=21256, max=21424, per=32.38%, avg=21340.00, stdev=118.79, samples=2 00:40:05.467 iops : min= 5314, max= 5356, avg=5335.00, stdev=29.70, samples=2 00:40:05.467 lat (msec) : 2=0.14%, 10=12.41%, 20=85.83%, 50=1.62% 00:40:05.467 cpu : usr=4.70%, sys=5.99%, ctx=449, majf=0, minf=1 00:40:05.467 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:40:05.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:05.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:05.467 issued rwts: 
total=5120,5462,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:05.467 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:05.467 job1: (groupid=0, jobs=1): err= 0: pid=449739: Mon Nov 18 20:41:17 2024 00:40:05.467 read: IOPS=2517, BW=9.83MiB/s (10.3MB/s)(10.0MiB/1017msec) 00:40:05.467 slat (usec): min=3, max=18117, avg=176.62, stdev=1428.76 00:40:05.467 clat (usec): min=14722, max=40770, avg=25106.47, stdev=6257.34 00:40:05.467 lat (usec): min=14727, max=48832, avg=25283.09, stdev=6364.89 00:40:05.467 clat percentiles (usec): 00:40:05.467 | 1.00th=[14877], 5.00th=[16581], 10.00th=[16909], 20.00th=[19530], 00:40:05.467 | 30.00th=[21103], 40.00th=[22414], 50.00th=[23200], 60.00th=[25297], 00:40:05.467 | 70.00th=[29492], 80.00th=[32375], 90.00th=[34866], 95.00th=[35390], 00:40:05.467 | 99.00th=[39584], 99.50th=[40109], 99.90th=[40109], 99.95th=[40109], 00:40:05.467 | 99.99th=[40633] 00:40:05.467 write: IOPS=2671, BW=10.4MiB/s (10.9MB/s)(10.6MiB/1017msec); 0 zone resets 00:40:05.467 slat (usec): min=4, max=21602, avg=196.71, stdev=1624.23 00:40:05.467 clat (usec): min=8822, max=77996, avg=23643.45, stdev=9848.96 00:40:05.467 lat (usec): min=10269, max=78004, avg=23840.16, stdev=9953.02 00:40:05.467 clat percentiles (usec): 00:40:05.467 | 1.00th=[10421], 5.00th=[11469], 10.00th=[14484], 20.00th=[16909], 00:40:05.467 | 30.00th=[19792], 40.00th=[21627], 50.00th=[22676], 60.00th=[22938], 00:40:05.467 | 70.00th=[24249], 80.00th=[24511], 90.00th=[32900], 95.00th=[37487], 00:40:05.467 | 99.00th=[78119], 99.50th=[78119], 99.90th=[78119], 99.95th=[78119], 00:40:05.467 | 99.99th=[78119] 00:40:05.467 bw ( KiB/s): min= 8728, max=11984, per=15.72%, avg=10356.00, stdev=2302.34, samples=2 00:40:05.467 iops : min= 2182, max= 2996, avg=2589.00, stdev=575.58, samples=2 00:40:05.467 lat (msec) : 10=0.02%, 20=25.87%, 50=73.11%, 100=1.00% 00:40:05.467 cpu : usr=2.07%, sys=3.44%, ctx=121, majf=0, minf=1 00:40:05.467 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, 
>=64=98.8% 00:40:05.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:05.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:05.467 issued rwts: total=2560,2717,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:05.467 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:05.467 job2: (groupid=0, jobs=1): err= 0: pid=449740: Mon Nov 18 20:41:17 2024 00:40:05.467 read: IOPS=2657, BW=10.4MiB/s (10.9MB/s)(10.5MiB/1011msec) 00:40:05.467 slat (usec): min=3, max=22054, avg=172.79, stdev=1450.12 00:40:05.467 clat (usec): min=3656, max=51613, avg=25084.25, stdev=7008.06 00:40:05.467 lat (usec): min=10091, max=51627, avg=25257.05, stdev=7098.19 00:40:05.467 clat percentiles (usec): 00:40:05.467 | 1.00th=[10552], 5.00th=[14877], 10.00th=[15664], 20.00th=[19530], 00:40:05.467 | 30.00th=[21890], 40.00th=[23200], 50.00th=[24249], 60.00th=[25822], 00:40:05.467 | 70.00th=[28181], 80.00th=[31065], 90.00th=[33424], 95.00th=[36963], 00:40:05.467 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45876], 99.95th=[47449], 00:40:05.467 | 99.99th=[51643] 00:40:05.467 write: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec); 0 zone resets 00:40:05.467 slat (usec): min=4, max=20727, avg=169.18, stdev=1515.15 00:40:05.467 clat (usec): min=7055, max=43404, avg=19621.18, stdev=4887.13 00:40:05.467 lat (usec): min=7062, max=43423, avg=19790.36, stdev=5081.04 00:40:05.467 clat percentiles (usec): 00:40:05.467 | 1.00th=[10945], 5.00th=[13435], 10.00th=[13960], 20.00th=[15270], 00:40:05.467 | 30.00th=[16450], 40.00th=[17695], 50.00th=[19268], 60.00th=[20841], 00:40:05.467 | 70.00th=[22938], 80.00th=[23462], 90.00th=[24511], 95.00th=[28181], 00:40:05.467 | 99.00th=[33424], 99.50th=[33424], 99.90th=[41157], 99.95th=[43254], 00:40:05.467 | 99.99th=[43254] 00:40:05.467 bw ( KiB/s): min=12280, max=12288, per=18.64%, avg=12284.00, stdev= 5.66, samples=2 00:40:05.467 iops : min= 3070, max= 3072, avg=3071.00, stdev= 1.41, samples=2 
00:40:05.467 lat (msec) : 4=0.02%, 10=0.19%, 20=41.50%, 50=58.27%, 100=0.02% 00:40:05.467 cpu : usr=2.67%, sys=3.27%, ctx=116, majf=0, minf=1 00:40:05.467 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:40:05.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:05.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:05.467 issued rwts: total=2687,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:05.467 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:05.467 job3: (groupid=0, jobs=1): err= 0: pid=449741: Mon Nov 18 20:41:17 2024 00:40:05.467 read: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec) 00:40:05.467 slat (usec): min=2, max=11576, avg=94.79, stdev=843.56 00:40:05.467 clat (usec): min=1266, max=26219, avg=12478.65, stdev=3031.67 00:40:05.467 lat (usec): min=1646, max=28984, avg=12573.44, stdev=3116.49 00:40:05.467 clat percentiles (usec): 00:40:05.467 | 1.00th=[ 3785], 5.00th=[ 8979], 10.00th=[10290], 20.00th=[11076], 00:40:05.467 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11863], 60.00th=[11994], 00:40:05.467 | 70.00th=[12256], 80.00th=[13435], 90.00th=[16581], 95.00th=[19530], 00:40:05.467 | 99.00th=[22414], 99.50th=[22938], 99.90th=[23462], 99.95th=[23462], 00:40:05.467 | 99.99th=[26346] 00:40:05.467 write: IOPS=5470, BW=21.4MiB/s (22.4MB/s)(21.5MiB/1006msec); 0 zone resets 00:40:05.467 slat (usec): min=3, max=10240, avg=78.02, stdev=550.89 00:40:05.467 clat (usec): min=846, max=23115, avg=11601.96, stdev=2698.14 00:40:05.467 lat (usec): min=853, max=23120, avg=11679.98, stdev=2735.96 00:40:05.467 clat percentiles (usec): 00:40:05.467 | 1.00th=[ 4113], 5.00th=[ 7177], 10.00th=[ 8291], 20.00th=[ 9372], 00:40:05.467 | 30.00th=[10421], 40.00th=[11600], 50.00th=[11994], 60.00th=[12387], 00:40:05.467 | 70.00th=[12780], 80.00th=[13042], 90.00th=[14484], 95.00th=[16581], 00:40:05.467 | 99.00th=[18482], 99.50th=[19268], 99.90th=[21365], 99.95th=[22938], 
00:40:05.467 | 99.99th=[23200] 00:40:05.467 bw ( KiB/s): min=20944, max=22064, per=32.63%, avg=21504.00, stdev=791.96, samples=2 00:40:05.467 iops : min= 5236, max= 5516, avg=5376.00, stdev=197.99, samples=2 00:40:05.467 lat (usec) : 1000=0.07% 00:40:05.467 lat (msec) : 2=0.09%, 4=0.80%, 10=15.93%, 20=80.63%, 50=2.49% 00:40:05.467 cpu : usr=4.48%, sys=4.08%, ctx=463, majf=0, minf=1 00:40:05.467 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:40:05.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:05.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:05.467 issued rwts: total=5120,5503,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:05.467 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:05.467 00:40:05.467 Run status group 0 (all jobs): 00:40:05.467 READ: bw=59.5MiB/s (62.4MB/s), 9.83MiB/s-20.0MiB/s (10.3MB/s-20.9MB/s), io=60.5MiB (63.4MB), run=1002-1017msec 00:40:05.467 WRITE: bw=64.4MiB/s (67.5MB/s), 10.4MiB/s-21.4MiB/s (10.9MB/s-22.4MB/s), io=65.4MiB (68.6MB), run=1002-1017msec 00:40:05.467 00:40:05.467 Disk stats (read/write): 00:40:05.467 nvme0n1: ios=4393/4608, merge=0/0, ticks=26286/25304, in_queue=51590, util=86.87% 00:40:05.467 nvme0n2: ios=2074/2479, merge=0/0, ticks=52461/53959, in_queue=106420, util=97.97% 00:40:05.467 nvme0n3: ios=2133/2560, merge=0/0, ticks=54231/50919, in_queue=105150, util=97.71% 00:40:05.467 nvme0n4: ios=4342/4608, merge=0/0, ticks=51721/52166, in_queue=103887, util=89.29% 00:40:05.467 20:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:40:05.467 [global] 00:40:05.467 thread=1 00:40:05.467 invalidate=1 00:40:05.467 rw=randwrite 00:40:05.467 time_based=1 00:40:05.467 runtime=1 00:40:05.467 ioengine=libaio 00:40:05.467 direct=1 00:40:05.467 bs=4096 00:40:05.467 iodepth=128 
00:40:05.467 norandommap=0 00:40:05.467 numjobs=1 00:40:05.467 00:40:05.467 verify_dump=1 00:40:05.467 verify_backlog=512 00:40:05.467 verify_state_save=0 00:40:05.467 do_verify=1 00:40:05.467 verify=crc32c-intel 00:40:05.467 [job0] 00:40:05.467 filename=/dev/nvme0n1 00:40:05.467 [job1] 00:40:05.467 filename=/dev/nvme0n2 00:40:05.467 [job2] 00:40:05.467 filename=/dev/nvme0n3 00:40:05.467 [job3] 00:40:05.467 filename=/dev/nvme0n4 00:40:05.467 Could not set queue depth (nvme0n1) 00:40:05.467 Could not set queue depth (nvme0n2) 00:40:05.467 Could not set queue depth (nvme0n3) 00:40:05.467 Could not set queue depth (nvme0n4) 00:40:05.467 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:05.467 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:05.467 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:05.467 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:05.467 fio-3.35 00:40:05.467 Starting 4 threads 00:40:06.846 00:40:06.846 job0: (groupid=0, jobs=1): err= 0: pid=450091: Mon Nov 18 20:41:18 2024 00:40:06.846 read: IOPS=4777, BW=18.7MiB/s (19.6MB/s)(18.7MiB/1003msec) 00:40:06.846 slat (usec): min=2, max=12366, avg=91.97, stdev=601.53 00:40:06.846 clat (usec): min=479, max=27489, avg=12223.38, stdev=3145.59 00:40:06.846 lat (usec): min=2209, max=27493, avg=12315.36, stdev=3183.07 00:40:06.846 clat percentiles (usec): 00:40:06.846 | 1.00th=[ 4555], 5.00th=[ 8225], 10.00th=[ 9241], 20.00th=[10290], 00:40:06.846 | 30.00th=[10814], 40.00th=[11469], 50.00th=[11994], 60.00th=[12780], 00:40:06.846 | 70.00th=[13173], 80.00th=[13698], 90.00th=[15533], 95.00th=[18220], 00:40:06.846 | 99.00th=[23987], 99.50th=[26084], 99.90th=[27395], 99.95th=[27395], 00:40:06.846 | 99.99th=[27395] 00:40:06.846 write: IOPS=5104, 
BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:40:06.846 slat (usec): min=3, max=18992, avg=99.41, stdev=722.03 00:40:06.846 clat (usec): min=850, max=48590, avg=13333.23, stdev=6127.90 00:40:06.846 lat (usec): min=862, max=48628, avg=13432.64, stdev=6158.38 00:40:06.846 clat percentiles (usec): 00:40:06.846 | 1.00th=[ 2802], 5.00th=[ 6849], 10.00th=[ 9634], 20.00th=[10421], 00:40:06.846 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11863], 60.00th=[12649], 00:40:06.846 | 70.00th=[13435], 80.00th=[13829], 90.00th=[19530], 95.00th=[25035], 00:40:06.846 | 99.00th=[39060], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 00:40:06.846 | 99.99th=[48497] 00:40:06.846 bw ( KiB/s): min=20480, max=20480, per=29.01%, avg=20480.00, stdev= 0.00, samples=2 00:40:06.846 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:40:06.846 lat (usec) : 500=0.01%, 1000=0.07% 00:40:06.846 lat (msec) : 2=0.09%, 4=0.82%, 10=12.91%, 20=80.22%, 50=5.88% 00:40:06.847 cpu : usr=4.09%, sys=5.89%, ctx=455, majf=0, minf=2 00:40:06.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:40:06.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:06.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:06.847 issued rwts: total=4792,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:06.847 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:06.847 job1: (groupid=0, jobs=1): err= 0: pid=450092: Mon Nov 18 20:41:18 2024 00:40:06.847 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:40:06.847 slat (usec): min=2, max=11940, avg=111.45, stdev=679.81 00:40:06.847 clat (usec): min=6654, max=49666, avg=14211.55, stdev=4696.13 00:40:06.847 lat (usec): min=6658, max=49683, avg=14323.00, stdev=4744.23 00:40:06.847 clat percentiles (usec): 00:40:06.847 | 1.00th=[ 8225], 5.00th=[ 9372], 10.00th=[10683], 20.00th=[11731], 00:40:06.847 | 30.00th=[11994], 40.00th=[12125], 
50.00th=[13173], 60.00th=[13698], 00:40:06.847 | 70.00th=[14615], 80.00th=[16188], 90.00th=[19006], 95.00th=[20579], 00:40:06.847 | 99.00th=[37487], 99.50th=[43254], 99.90th=[49546], 99.95th=[49546], 00:40:06.847 | 99.99th=[49546] 00:40:06.847 write: IOPS=4390, BW=17.1MiB/s (18.0MB/s)(17.2MiB/1002msec); 0 zone resets 00:40:06.847 slat (usec): min=3, max=42894, avg=114.14, stdev=878.25 00:40:06.847 clat (usec): min=1906, max=57703, avg=15595.35, stdev=9526.75 00:40:06.847 lat (usec): min=1910, max=57729, avg=15709.49, stdev=9571.67 00:40:06.847 clat percentiles (usec): 00:40:06.847 | 1.00th=[ 5473], 5.00th=[10159], 10.00th=[10421], 20.00th=[11207], 00:40:06.847 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12780], 60.00th=[13304], 00:40:06.847 | 70.00th=[13829], 80.00th=[15926], 90.00th=[22414], 95.00th=[41157], 00:40:06.847 | 99.00th=[56361], 99.50th=[56361], 99.90th=[57410], 99.95th=[57934], 00:40:06.847 | 99.99th=[57934] 00:40:06.847 bw ( KiB/s): min=16304, max=17872, per=24.21%, avg=17088.00, stdev=1108.74, samples=2 00:40:06.847 iops : min= 4076, max= 4468, avg=4272.00, stdev=277.19, samples=2 00:40:06.847 lat (msec) : 2=0.20%, 4=0.01%, 10=4.53%, 20=84.50%, 50=8.72% 00:40:06.847 lat (msec) : 100=2.04% 00:40:06.847 cpu : usr=5.79%, sys=7.39%, ctx=367, majf=0, minf=2 00:40:06.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:40:06.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:06.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:06.847 issued rwts: total=4096,4399,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:06.847 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:06.847 job2: (groupid=0, jobs=1): err= 0: pid=450093: Mon Nov 18 20:41:18 2024 00:40:06.847 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:40:06.847 slat (usec): min=2, max=14769, avg=114.26, stdev=766.01 00:40:06.847 clat (usec): min=6253, max=29371, avg=15347.30, stdev=3288.30 
00:40:06.847 lat (usec): min=6263, max=29381, avg=15461.56, stdev=3332.39 00:40:06.847 clat percentiles (usec): 00:40:06.847 | 1.00th=[ 8717], 5.00th=[10683], 10.00th=[11338], 20.00th=[12780], 00:40:06.847 | 30.00th=[13566], 40.00th=[14222], 50.00th=[15139], 60.00th=[15795], 00:40:06.847 | 70.00th=[16581], 80.00th=[17433], 90.00th=[19530], 95.00th=[21103], 00:40:06.847 | 99.00th=[25297], 99.50th=[27919], 99.90th=[28705], 99.95th=[29230], 00:40:06.847 | 99.99th=[29492] 00:40:06.847 write: IOPS=4259, BW=16.6MiB/s (17.4MB/s)(16.7MiB/1002msec); 0 zone resets 00:40:06.847 slat (usec): min=3, max=14524, avg=106.10, stdev=702.17 00:40:06.847 clat (usec): min=290, max=66854, avg=14794.39, stdev=4154.49 00:40:06.847 lat (usec): min=3995, max=66877, avg=14900.49, stdev=4190.95 00:40:06.847 clat percentiles (usec): 00:40:06.847 | 1.00th=[ 5604], 5.00th=[ 8717], 10.00th=[10552], 20.00th=[12518], 00:40:06.847 | 30.00th=[13566], 40.00th=[14484], 50.00th=[14877], 60.00th=[15270], 00:40:06.847 | 70.00th=[15795], 80.00th=[16188], 90.00th=[17171], 95.00th=[20579], 00:40:06.847 | 99.00th=[26346], 99.50th=[32375], 99.90th=[57934], 99.95th=[59507], 00:40:06.847 | 99.99th=[66847] 00:40:06.847 bw ( KiB/s): min=16384, max=16744, per=23.47%, avg=16564.00, stdev=254.56, samples=2 00:40:06.847 iops : min= 4096, max= 4186, avg=4141.00, stdev=63.64, samples=2 00:40:06.847 lat (usec) : 500=0.01% 00:40:06.847 lat (msec) : 4=0.01%, 10=5.82%, 20=86.84%, 50=7.23%, 100=0.08% 00:40:06.847 cpu : usr=3.70%, sys=5.89%, ctx=353, majf=0, minf=1 00:40:06.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:40:06.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:06.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:06.847 issued rwts: total=4096,4268,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:06.847 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:06.847 job3: (groupid=0, jobs=1): err= 0: pid=450094: 
Mon Nov 18 20:41:18 2024 00:40:06.847 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:40:06.847 slat (usec): min=2, max=21993, avg=136.54, stdev=902.40 00:40:06.847 clat (usec): min=7196, max=88146, avg=17452.36, stdev=8699.66 00:40:06.847 lat (usec): min=7207, max=88155, avg=17588.90, stdev=8745.72 00:40:06.847 clat percentiles (usec): 00:40:06.847 | 1.00th=[ 9110], 5.00th=[11338], 10.00th=[12125], 20.00th=[12911], 00:40:06.847 | 30.00th=[13698], 40.00th=[15008], 50.00th=[15795], 60.00th=[16057], 00:40:06.847 | 70.00th=[16581], 80.00th=[17957], 90.00th=[21365], 95.00th=[38011], 00:40:06.847 | 99.00th=[59507], 99.50th=[61080], 99.90th=[88605], 99.95th=[88605], 00:40:06.847 | 99.99th=[88605] 00:40:06.847 write: IOPS=3914, BW=15.3MiB/s (16.0MB/s)(15.4MiB/1004msec); 0 zone resets 00:40:06.847 slat (usec): min=3, max=26028, avg=123.15, stdev=789.86 00:40:06.847 clat (usec): min=3221, max=53223, avg=16312.57, stdev=5672.32 00:40:06.847 lat (usec): min=4079, max=53235, avg=16435.72, stdev=5711.48 00:40:06.847 clat percentiles (usec): 00:40:06.847 | 1.00th=[ 7963], 5.00th=[12518], 10.00th=[13435], 20.00th=[13960], 00:40:06.847 | 30.00th=[14222], 40.00th=[14746], 50.00th=[15008], 60.00th=[15401], 00:40:06.847 | 70.00th=[15795], 80.00th=[17957], 90.00th=[19006], 95.00th=[23462], 00:40:06.847 | 99.00th=[47973], 99.50th=[47973], 99.90th=[53216], 99.95th=[53216], 00:40:06.847 | 99.99th=[53216] 00:40:06.847 bw ( KiB/s): min=13752, max=16672, per=21.55%, avg=15212.00, stdev=2064.75, samples=2 00:40:06.847 iops : min= 3438, max= 4168, avg=3803.00, stdev=516.19, samples=2 00:40:06.847 lat (msec) : 4=0.01%, 10=1.73%, 20=88.61%, 50=8.42%, 100=1.22% 00:40:06.847 cpu : usr=2.89%, sys=5.48%, ctx=427, majf=0, minf=1 00:40:06.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:40:06.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:06.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:40:06.847 issued rwts: total=3584,3930,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:06.847 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:06.847 00:40:06.847 Run status group 0 (all jobs): 00:40:06.847 READ: bw=64.5MiB/s (67.6MB/s), 13.9MiB/s-18.7MiB/s (14.6MB/s-19.6MB/s), io=64.7MiB (67.9MB), run=1002-1004msec 00:40:06.847 WRITE: bw=68.9MiB/s (72.3MB/s), 15.3MiB/s-19.9MiB/s (16.0MB/s-20.9MB/s), io=69.2MiB (72.6MB), run=1002-1004msec 00:40:06.847 00:40:06.847 Disk stats (read/write): 00:40:06.847 nvme0n1: ios=4145/4359, merge=0/0, ticks=22594/23214, in_queue=45808, util=85.77% 00:40:06.847 nvme0n2: ios=3604/3719, merge=0/0, ticks=22635/22522, in_queue=45157, util=89.54% 00:40:06.847 nvme0n3: ios=3416/3584, merge=0/0, ticks=35420/37165, in_queue=72585, util=94.79% 00:40:06.847 nvme0n4: ios=3231/3584, merge=0/0, ticks=20508/24756, in_queue=45264, util=94.65% 00:40:06.847 20:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:40:06.847 20:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=450230 00:40:06.847 20:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:40:06.847 20:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:40:06.847 [global] 00:40:06.847 thread=1 00:40:06.847 invalidate=1 00:40:06.847 rw=read 00:40:06.847 time_based=1 00:40:06.847 runtime=10 00:40:06.847 ioengine=libaio 00:40:06.847 direct=1 00:40:06.847 bs=4096 00:40:06.847 iodepth=1 00:40:06.847 norandommap=1 00:40:06.847 numjobs=1 00:40:06.847 00:40:06.847 [job0] 00:40:06.847 filename=/dev/nvme0n1 00:40:06.847 [job1] 00:40:06.847 filename=/dev/nvme0n2 00:40:06.847 [job2] 00:40:06.847 filename=/dev/nvme0n3 00:40:06.847 [job3] 00:40:06.847 filename=/dev/nvme0n4 00:40:06.847 Could not set queue depth 
(nvme0n1) 00:40:06.847 Could not set queue depth (nvme0n2) 00:40:06.847 Could not set queue depth (nvme0n3) 00:40:06.847 Could not set queue depth (nvme0n4) 00:40:07.106 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:07.106 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:07.106 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:07.106 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:07.106 fio-3.35 00:40:07.106 Starting 4 threads 00:40:09.651 20:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:40:10.219 20:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:40:10.219 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=4169728, buflen=4096 00:40:10.219 fio: pid=450321, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:10.477 20:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:10.477 20:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:40:10.477 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=55652352, buflen=4096 00:40:10.477 fio: pid=450320, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:10.735 20:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:40:10.735 20:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:40:10.735 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=15687680, buflen=4096 00:40:10.735 fio: pid=450318, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:10.994 20:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:10.994 20:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:40:10.994 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=4890624, buflen=4096 00:40:10.994 fio: pid=450319, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:10.994 00:40:10.994 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=450318: Mon Nov 18 20:41:22 2024 00:40:10.994 read: IOPS=1088, BW=4351KiB/s (4455kB/s)(15.0MiB/3521msec) 00:40:10.994 slat (usec): min=5, max=902, avg=12.73, stdev=15.63 00:40:10.994 clat (usec): min=207, max=43107, avg=897.21, stdev=5079.72 00:40:10.994 lat (usec): min=217, max=43121, avg=909.92, stdev=5082.11 00:40:10.994 clat percentiles (usec): 00:40:10.994 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 243], 00:40:10.994 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 255], 00:40:10.994 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 289], 00:40:10.994 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:40:10.994 | 99.99th=[43254] 00:40:10.994 bw ( KiB/s): min= 96, max=15216, per=24.62%, avg=5090.67, stdev=7120.13, samples=6 00:40:10.994 iops : min= 24, max= 3804, 
avg=1272.67, stdev=1780.03, samples=6 00:40:10.994 lat (usec) : 250=46.36%, 500=51.50%, 750=0.39%, 1000=0.08% 00:40:10.994 lat (msec) : 2=0.08%, 50=1.57% 00:40:10.994 cpu : usr=0.85%, sys=2.02%, ctx=3834, majf=0, minf=2 00:40:10.994 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:10.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:10.994 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:10.994 issued rwts: total=3831,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:10.994 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:10.994 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=450319: Mon Nov 18 20:41:22 2024 00:40:10.994 read: IOPS=314, BW=1258KiB/s (1288kB/s)(4776KiB/3797msec) 00:40:10.994 slat (usec): min=4, max=12854, avg=32.98, stdev=498.47 00:40:10.994 clat (usec): min=193, max=41921, avg=3136.23, stdev=10420.55 00:40:10.994 lat (usec): min=198, max=54017, avg=3169.22, stdev=10521.85 00:40:10.994 clat percentiles (usec): 00:40:10.994 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 229], 00:40:10.994 | 30.00th=[ 239], 40.00th=[ 247], 50.00th=[ 260], 60.00th=[ 277], 00:40:10.994 | 70.00th=[ 302], 80.00th=[ 330], 90.00th=[ 388], 95.00th=[41157], 00:40:10.994 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:40:10.994 | 99.99th=[41681] 00:40:10.994 bw ( KiB/s): min= 96, max= 5112, per=6.56%, avg=1356.00, stdev=1943.57, samples=7 00:40:10.994 iops : min= 24, max= 1278, avg=339.00, stdev=485.89, samples=7 00:40:10.994 lat (usec) : 250=43.18%, 500=49.54%, 750=0.17% 00:40:10.994 lat (msec) : 50=7.03% 00:40:10.994 cpu : usr=0.16%, sys=0.29%, ctx=1198, majf=0, minf=2 00:40:10.994 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:10.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:10.994 complete : 0=0.1%, 4=99.9%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:10.994 issued rwts: total=1195,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:10.994 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:10.994 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=450320: Mon Nov 18 20:41:22 2024 00:40:10.994 read: IOPS=4230, BW=16.5MiB/s (17.3MB/s)(53.1MiB/3212msec) 00:40:10.994 slat (nsec): min=4461, max=64372, avg=7749.84, stdev=4075.87 00:40:10.994 clat (usec): min=174, max=4006, avg=225.30, stdev=47.19 00:40:10.994 lat (usec): min=180, max=4012, avg=233.04, stdev=48.46 00:40:10.994 clat percentiles (usec): 00:40:10.994 | 1.00th=[ 200], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 206], 00:40:10.994 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 223], 00:40:10.994 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 258], 00:40:10.994 | 99.00th=[ 388], 99.50th=[ 474], 99.90th=[ 611], 99.95th=[ 619], 00:40:10.994 | 99.99th=[ 963] 00:40:10.994 bw ( KiB/s): min=15264, max=18536, per=82.11%, avg=16980.00, stdev=1193.34, samples=6 00:40:10.994 iops : min= 3816, max= 4634, avg=4245.00, stdev=298.34, samples=6 00:40:10.994 lat (usec) : 250=90.47%, 500=9.16%, 750=0.35%, 1000=0.01% 00:40:10.994 lat (msec) : 10=0.01% 00:40:10.994 cpu : usr=1.18%, sys=3.86%, ctx=13590, majf=0, minf=1 00:40:10.994 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:10.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:10.994 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:10.994 issued rwts: total=13588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:10.994 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:10.994 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=450321: Mon Nov 18 20:41:22 2024 00:40:10.994 read: IOPS=346, BW=1383KiB/s (1416kB/s)(4072KiB/2944msec) 00:40:10.994 
slat (nsec): min=4726, max=69689, avg=13017.13, stdev=7845.72 00:40:10.994 clat (usec): min=200, max=41371, avg=2852.71, stdev=9724.45 00:40:10.994 lat (usec): min=206, max=41385, avg=2865.72, stdev=9726.27 00:40:10.994 clat percentiles (usec): 00:40:10.994 | 1.00th=[ 221], 5.00th=[ 247], 10.00th=[ 265], 20.00th=[ 293], 00:40:10.994 | 30.00th=[ 314], 40.00th=[ 338], 50.00th=[ 363], 60.00th=[ 383], 00:40:10.994 | 70.00th=[ 396], 80.00th=[ 420], 90.00th=[ 515], 95.00th=[41157], 00:40:10.994 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:10.994 | 99.99th=[41157] 00:40:10.994 bw ( KiB/s): min= 96, max= 4064, per=7.79%, avg=1611.20, stdev=2078.67, samples=5 00:40:10.994 iops : min= 24, max= 1016, avg=402.80, stdev=519.67, samples=5 00:40:10.994 lat (usec) : 250=5.30%, 500=83.61%, 750=4.51%, 1000=0.10% 00:40:10.994 lat (msec) : 2=0.20%, 50=6.18% 00:40:10.994 cpu : usr=0.14%, sys=0.58%, ctx=1021, majf=0, minf=1 00:40:10.994 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:10.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:10.994 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:10.994 issued rwts: total=1019,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:10.994 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:10.994 00:40:10.994 Run status group 0 (all jobs): 00:40:10.994 READ: bw=20.2MiB/s (21.2MB/s), 1258KiB/s-16.5MiB/s (1288kB/s-17.3MB/s), io=76.7MiB (80.4MB), run=2944-3797msec 00:40:10.994 00:40:10.994 Disk stats (read/write): 00:40:10.994 nvme0n1: ios=3869/0, merge=0/0, ticks=3889/0, in_queue=3889, util=100.00% 00:40:10.994 nvme0n2: ios=1207/0, merge=0/0, ticks=3741/0, in_queue=3741, util=97.27% 00:40:10.994 nvme0n3: ios=13251/0, merge=0/0, ticks=3235/0, in_queue=3235, util=100.00% 00:40:10.994 nvme0n4: ios=1058/0, merge=0/0, ticks=3610/0, in_queue=3610, util=100.00% 00:40:11.252 20:41:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:11.252 20:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:40:11.510 20:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:11.510 20:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:40:11.769 20:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:11.769 20:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:40:12.027 20:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:12.027 20:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:40:12.285 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:40:12.285 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 450230 00:40:12.285 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:40:12.285 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:12.544 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:12.544 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:12.544 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:40:12.544 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:12.544 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:12.544 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:12.544 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:12.544 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:40:12.544 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:40:12.544 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:40:12.544 nvmf hotplug test: fio failed as expected 00:40:12.544 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:12.804 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:40:12.804 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:40:12.804 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:40:12.804 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:40:12.804 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:40:12.804 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:12.804 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:40:12.804 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:12.804 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:40:12.804 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:12.804 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:12.804 rmmod nvme_tcp 00:40:12.804 rmmod nvme_fabrics 00:40:12.804 rmmod nvme_keyring 00:40:12.804 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:12.804 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:40:12.804 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:40:12.804 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 448219 ']' 00:40:12.804 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 448219 00:40:12.804 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 448219 ']' 00:40:12.804 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 448219 00:40:12.805 20:41:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:40:12.805 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:12.805 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 448219 00:40:12.805 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:12.805 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:12.805 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 448219' 00:40:12.805 killing process with pid 448219 00:40:12.805 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 448219 00:40:12.805 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 448219 00:40:13.064 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:13.064 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:13.064 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:13.064 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:40:13.064 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:40:13.064 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:13.064 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:40:13.064 20:41:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:13.064 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:13.064 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:13.064 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:13.064 20:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:14.968 20:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:15.228 00:40:15.228 real 0m23.605s 00:40:15.228 user 1m6.318s 00:40:15.228 sys 0m10.336s 00:40:15.228 20:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:15.228 20:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:15.228 ************************************ 00:40:15.228 END TEST nvmf_fio_target 00:40:15.228 ************************************ 00:40:15.228 20:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:15.228 20:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:15.228 20:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:15.228 20:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:15.228 ************************************ 00:40:15.228 START TEST nvmf_bdevio 00:40:15.228 
************************************ 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:15.228 * Looking for test storage... 00:40:15.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:40:15.228 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:15.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.229 --rc genhtml_branch_coverage=1 00:40:15.229 --rc genhtml_function_coverage=1 00:40:15.229 --rc genhtml_legend=1 00:40:15.229 --rc geninfo_all_blocks=1 00:40:15.229 --rc geninfo_unexecuted_blocks=1 00:40:15.229 00:40:15.229 ' 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:15.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.229 --rc genhtml_branch_coverage=1 00:40:15.229 --rc genhtml_function_coverage=1 00:40:15.229 --rc genhtml_legend=1 00:40:15.229 --rc geninfo_all_blocks=1 00:40:15.229 --rc geninfo_unexecuted_blocks=1 00:40:15.229 00:40:15.229 ' 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:15.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.229 --rc genhtml_branch_coverage=1 00:40:15.229 --rc genhtml_function_coverage=1 00:40:15.229 --rc genhtml_legend=1 00:40:15.229 --rc geninfo_all_blocks=1 00:40:15.229 --rc geninfo_unexecuted_blocks=1 00:40:15.229 00:40:15.229 ' 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:15.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:40:15.229 --rc genhtml_branch_coverage=1 00:40:15.229 --rc genhtml_function_coverage=1 00:40:15.229 --rc genhtml_legend=1 00:40:15.229 --rc geninfo_all_blocks=1 00:40:15.229 --rc geninfo_unexecuted_blocks=1 00:40:15.229 00:40:15.229 ' 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:15.229 20:41:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.229 20:41:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:40:15.229 20:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:40:17.766 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:17.766 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:40:17.766 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:17.766 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:17.766 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:17.766 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:17.767 20:41:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:17.767 20:41:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:17.767 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:17.767 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:17.767 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:17.767 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:17.767 
20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:17.767 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:17.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:17.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:40:17.768 00:40:17.768 --- 10.0.0.2 ping statistics --- 00:40:17.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:17.768 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:17.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:17.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:40:17.768 00:40:17.768 --- 10.0.0.1 ping statistics --- 00:40:17.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:17.768 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=452946 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 452946 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 452946 ']' 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:17.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:17.768 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:17.768 [2024-11-18 20:41:29.594319] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:17.768 [2024-11-18 20:41:29.595374] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:40:17.768 [2024-11-18 20:41:29.595443] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:17.768 [2024-11-18 20:41:29.668553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:17.768 [2024-11-18 20:41:29.715905] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:17.768 [2024-11-18 20:41:29.715980] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:17.768 [2024-11-18 20:41:29.716005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:17.768 [2024-11-18 20:41:29.716016] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:17.768 [2024-11-18 20:41:29.716026] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:17.768 [2024-11-18 20:41:29.717462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:17.768 [2024-11-18 20:41:29.717526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:40:17.768 [2024-11-18 20:41:29.717594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:40:17.768 [2024-11-18 20:41:29.717596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:18.028 [2024-11-18 20:41:29.802204] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:18.028 [2024-11-18 20:41:29.802419] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:18.028 [2024-11-18 20:41:29.802713] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:40:18.028 [2024-11-18 20:41:29.803303] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:18.028 [2024-11-18 20:41:29.803533] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:18.028 [2024-11-18 20:41:29.854296] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:18.028 Malloc0 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:18.028 [2024-11-18 20:41:29.918511] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:18.028 { 00:40:18.028 "params": { 00:40:18.028 "name": "Nvme$subsystem", 00:40:18.028 "trtype": "$TEST_TRANSPORT", 00:40:18.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:18.028 "adrfam": "ipv4", 00:40:18.028 "trsvcid": "$NVMF_PORT", 00:40:18.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:18.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:18.028 "hdgst": ${hdgst:-false}, 00:40:18.028 "ddgst": ${ddgst:-false} 00:40:18.028 }, 00:40:18.028 "method": "bdev_nvme_attach_controller" 00:40:18.028 } 00:40:18.028 EOF 00:40:18.028 )") 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:40:18.028 20:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:18.028 "params": { 00:40:18.028 "name": "Nvme1", 00:40:18.028 "trtype": "tcp", 00:40:18.028 "traddr": "10.0.0.2", 00:40:18.028 "adrfam": "ipv4", 00:40:18.028 "trsvcid": "4420", 00:40:18.028 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:18.028 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:18.028 "hdgst": false, 00:40:18.028 "ddgst": false 00:40:18.028 }, 00:40:18.028 "method": "bdev_nvme_attach_controller" 00:40:18.028 }' 00:40:18.028 [2024-11-18 20:41:29.967188] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:40:18.028 [2024-11-18 20:41:29.967277] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid453086 ] 00:40:18.286 [2024-11-18 20:41:30.038633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:18.286 [2024-11-18 20:41:30.090878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:18.286 [2024-11-18 20:41:30.090933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:18.286 [2024-11-18 20:41:30.090936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:18.544 I/O targets: 00:40:18.544 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:40:18.544 00:40:18.544 00:40:18.544 CUnit - A unit testing framework for C - Version 2.1-3 00:40:18.544 http://cunit.sourceforge.net/ 00:40:18.544 00:40:18.544 00:40:18.544 Suite: bdevio tests on: Nvme1n1 00:40:18.544 Test: blockdev write read block ...passed 00:40:18.544 Test: blockdev write zeroes read block ...passed 00:40:18.544 Test: blockdev write zeroes read no split ...passed 00:40:18.544 Test: blockdev 
write zeroes read split ...passed 00:40:18.544 Test: blockdev write zeroes read split partial ...passed 00:40:18.544 Test: blockdev reset ...[2024-11-18 20:41:30.541380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:40:18.545 [2024-11-18 20:41:30.541499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1143b70 (9): Bad file descriptor 00:40:18.545 [2024-11-18 20:41:30.545741] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:40:18.545 passed 00:40:18.803 Test: blockdev write read 8 blocks ...passed 00:40:18.803 Test: blockdev write read size > 128k ...passed 00:40:18.803 Test: blockdev write read invalid size ...passed 00:40:18.803 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:18.803 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:18.803 Test: blockdev write read max offset ...passed 00:40:18.803 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:18.803 Test: blockdev writev readv 8 blocks ...passed 00:40:18.803 Test: blockdev writev readv 30 x 1block ...passed 00:40:19.061 Test: blockdev writev readv block ...passed 00:40:19.061 Test: blockdev writev readv size > 128k ...passed 00:40:19.061 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:19.061 Test: blockdev comparev and writev ...[2024-11-18 20:41:30.842866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:19.061 [2024-11-18 20:41:30.842915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:19.061 [2024-11-18 20:41:30.842940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:19.061 
[2024-11-18 20:41:30.842957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:40:19.061 [2024-11-18 20:41:30.843348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:19.061 [2024-11-18 20:41:30.843372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:40:19.061 [2024-11-18 20:41:30.843394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:19.061 [2024-11-18 20:41:30.843410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:40:19.061 [2024-11-18 20:41:30.843790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:19.061 [2024-11-18 20:41:30.843817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:40:19.061 [2024-11-18 20:41:30.843839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:19.061 [2024-11-18 20:41:30.843856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:40:19.061 [2024-11-18 20:41:30.844217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:19.061 [2024-11-18 20:41:30.844242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:40:19.061 [2024-11-18 20:41:30.844264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:19.061 [2024-11-18 20:41:30.844280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:40:19.061 passed 00:40:19.061 Test: blockdev nvme passthru rw ...passed 00:40:19.061 Test: blockdev nvme passthru vendor specific ...[2024-11-18 20:41:30.925884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:19.061 [2024-11-18 20:41:30.925911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:40:19.061 [2024-11-18 20:41:30.926054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:19.062 [2024-11-18 20:41:30.926078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:40:19.062 [2024-11-18 20:41:30.926233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:19.062 [2024-11-18 20:41:30.926257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:40:19.062 [2024-11-18 20:41:30.926397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:19.062 [2024-11-18 20:41:30.926420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:40:19.062 passed 00:40:19.062 Test: blockdev nvme admin passthru ...passed 00:40:19.062 Test: blockdev copy ...passed 00:40:19.062 00:40:19.062 Run Summary: Type Total Ran Passed Failed Inactive 00:40:19.062 suites 1 1 n/a 0 0 00:40:19.062 tests 23 23 23 0 0 00:40:19.062 asserts 152 152 152 0 n/a 00:40:19.062 00:40:19.062 Elapsed time = 1.100 
seconds 00:40:19.320 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:19.320 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:19.320 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:19.320 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:19.320 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:40:19.320 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:40:19.320 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:19.320 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:40:19.320 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:19.320 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:40:19.320 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:19.320 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:19.320 rmmod nvme_tcp 00:40:19.320 rmmod nvme_fabrics 00:40:19.320 rmmod nvme_keyring 00:40:19.320 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:19.320 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:40:19.320 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:40:19.320 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 452946 ']' 00:40:19.320 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 452946 00:40:19.320 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 452946 ']' 00:40:19.320 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 452946 00:40:19.320 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:40:19.320 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:19.320 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 452946 00:40:19.320 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:40:19.320 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:40:19.320 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 452946' 00:40:19.320 killing process with pid 452946 00:40:19.320 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 452946 00:40:19.320 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 452946 00:40:19.580 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:19.580 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:19.580 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:19.580 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 
-- # iptr 00:40:19.580 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:40:19.580 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:19.580 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:40:19.580 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:19.580 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:19.580 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:19.580 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:19.580 20:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:21.597 20:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:21.597 00:40:21.597 real 0m6.518s 00:40:21.597 user 0m8.851s 00:40:21.597 sys 0m2.514s 00:40:21.597 20:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:21.597 20:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:21.597 ************************************ 00:40:21.597 END TEST nvmf_bdevio 00:40:21.597 ************************************ 00:40:21.597 20:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:40:21.597 00:40:21.597 real 3m54.838s 00:40:21.597 user 8m53.827s 00:40:21.597 sys 1m25.171s 00:40:21.597 20:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:40:21.597 20:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:21.597 ************************************ 00:40:21.597 END TEST nvmf_target_core_interrupt_mode 00:40:21.597 ************************************ 00:40:21.597 20:41:33 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:21.597 20:41:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:21.597 20:41:33 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:21.597 20:41:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:21.857 ************************************ 00:40:21.857 START TEST nvmf_interrupt 00:40:21.857 ************************************ 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:21.857 * Looking for test storage... 
00:40:21.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:21.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:21.857 --rc genhtml_branch_coverage=1 00:40:21.857 --rc genhtml_function_coverage=1 00:40:21.857 --rc genhtml_legend=1 00:40:21.857 --rc geninfo_all_blocks=1 00:40:21.857 --rc geninfo_unexecuted_blocks=1 00:40:21.857 00:40:21.857 ' 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:21.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:21.857 --rc genhtml_branch_coverage=1 00:40:21.857 --rc 
genhtml_function_coverage=1 00:40:21.857 --rc genhtml_legend=1 00:40:21.857 --rc geninfo_all_blocks=1 00:40:21.857 --rc geninfo_unexecuted_blocks=1 00:40:21.857 00:40:21.857 ' 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:21.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:21.857 --rc genhtml_branch_coverage=1 00:40:21.857 --rc genhtml_function_coverage=1 00:40:21.857 --rc genhtml_legend=1 00:40:21.857 --rc geninfo_all_blocks=1 00:40:21.857 --rc geninfo_unexecuted_blocks=1 00:40:21.857 00:40:21.857 ' 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:21.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:21.857 --rc genhtml_branch_coverage=1 00:40:21.857 --rc genhtml_function_coverage=1 00:40:21.857 --rc genhtml_legend=1 00:40:21.857 --rc geninfo_all_blocks=1 00:40:21.857 --rc geninfo_unexecuted_blocks=1 00:40:21.857 00:40:21.857 ' 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:21.857 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:21.858 
20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:21.858 
20:41:33 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:21.858 20:41:33 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:21.858 
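The build_nvmf_app_args trace above accumulates the target's argv in a bash array, taking the `--interrupt-mode` branch because this run tests interrupt mode. A minimal sketch with values mirrored from this log (variable names follow nvmf/common.sh, but the surrounding conditionals are simplified):

```shell
# Sketch of build_nvmf_app_args as traced above: argv accumulates in an array.
NVMF_APP=(nvmf_tgt)
NVMF_APP_SHM_ID=0
TEST_INTERRUPT_MODE=1                    # the `'[' 1 -eq 1 ']'` branch above
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
if [ "$TEST_INTERRUPT_MODE" -eq 1 ]; then
  NVMF_APP+=(--interrupt-mode)           # nvmf/common.sh@34 in the trace
fi
echo "${NVMF_APP[@]}"
# prints: nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode
```

This matches the eventual invocation visible later in the log: `nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3` (the `-m 0x3` core mask is appended separately by nvmfappstart).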
20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:40:21.858 20:41:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:24.392 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:24.392 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:40:24.392 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:24.392 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:24.392 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:24.392 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:24.392 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:24.392 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:40:24.392 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:24.392 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:40:24.392 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:40:24.392 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:40:24.392 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:40:24.392 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:40:24.392 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:40:24.392 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:24.392 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:24.392 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:24.392 20:41:35 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:24.392 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:24.392 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:24.392 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:24.392 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:24.392 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:24.392 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:24.392 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:24.393 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:24.393 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:24.393 20:41:35 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:24.393 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:24.393 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:24.393 20:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:24.393 20:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:24.393 20:41:36 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:40:24.393 20:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:40:24.393 20:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:40:24.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:40:24.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms
00:40:24.393
00:40:24.393 --- 10.0.0.2 ping statistics ---
00:40:24.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:40:24.393 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms
00:40:24.393 20:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:40:24.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:40:24.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms
00:40:24.393
00:40:24.393 --- 10.0.0.1 ping statistics ---
00:40:24.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:40:24.393 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms
00:40:24.393 20:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:40:24.393 20:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0
00:40:24.393 20:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:40:24.393 20:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:40:24.393 20:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:40:24.393 20:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:40:24.393 20:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:40:24.393 20:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:40:24.393 20:41:36
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:24.393 20:41:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:40:24.393 20:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:24.393 20:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:24.393 20:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:24.393 20:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=455179 00:40:24.393 20:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:40:24.393 20:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 455179 00:40:24.393 20:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 455179 ']' 00:40:24.393 20:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:24.393 20:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:24.393 20:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:24.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:24.393 20:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:24.394 20:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:24.394 [2024-11-18 20:41:36.109234] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:24.394 [2024-11-18 20:41:36.110376] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:40:24.394 [2024-11-18 20:41:36.110431] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:40:24.394 [2024-11-18 20:41:36.182845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:40:24.394 [2024-11-18 20:41:36.229802] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:40:24.394 [2024-11-18 20:41:36.229856] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:40:24.394 [2024-11-18 20:41:36.229869] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:40:24.394 [2024-11-18 20:41:36.229880] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:40:24.394 [2024-11-18 20:41:36.229889] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:40:24.394 [2024-11-18 20:41:36.231347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:40:24.394 [2024-11-18 20:41:36.231352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:40:24.394 [2024-11-18 20:41:36.324349] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:40:24.394 [2024-11-18 20:41:36.324381] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:40:24.394 [2024-11-18 20:41:36.324611] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
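Further down, the harness repeatedly classifies each reactor thread as busy or idle by parsing a single line of `top -bHn 1` output. A minimal sketch of that classification; the simplified function below is not the real helper from test/interrupt/common.sh, but its thresholds (30/65) and the %CPU field position (field 9) mirror the values traced in this log:

```shell
# Sketch of the reactor_is_busy_or_idle check traced in this log: strip
# leading whitespace, take field 9 (%CPU) of the top line, truncate the
# fraction, and compare against the idle/busy thresholds.
classify_reactor() {
  local top_line=$1 idle_threshold=30 busy_threshold=65
  local cpu_rate
  cpu_rate=$(printf '%s\n' "$top_line" | sed -e 's/^\s*//g' | awk '{print $9}')
  cpu_rate=${cpu_rate%.*}        # 99.9 -> 99, 0.0 -> 0, as in the script
  if (( cpu_rate > busy_threshold )); then
    echo busy
  elif (( cpu_rate <= idle_threshold )); then
    echo idle
  else
    echo neither
  fi
}

# Sample lines copied from the trace output in this log:
classify_reactor ' 455179 root 20 0 128.2g 47616 34560 S 0.0 0.1 0:00.26 reactor_0'
# prints: idle
classify_reactor ' 455179 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:00.47 reactor_0'
# prints: busy
```

The real helper wraps this in a retry loop of up to 10 samples (the `(( j = 10 ))` / `(( j != 0 ))` pair in the trace) before declaring the check failed; the sketch evaluates a single sample.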
00:40:24.394 20:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:24.394 20:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:40:24.394 20:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:24.394 20:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:24.394 20:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:24.394 20:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:24.394 20:41:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:40:24.394 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:40:24.394 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:40:24.394 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:40:24.394 5000+0 records in 00:40:24.394 5000+0 records out 00:40:24.394 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0149035 s, 687 MB/s 00:40:24.394 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:40:24.394 20:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.394 20:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:24.653 AIO0 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.653 20:41:36 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:24.653 [2024-11-18 20:41:36.424030] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:24.653 [2024-11-18 20:41:36.448244] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 455179 0 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 
-- # reactor_is_busy_or_idle 455179 0 idle 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=455179 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 455179 -w 256 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 455179 root 20 0 128.2g 47616 34560 S 0.0 0.1 0:00.26 reactor_0' 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 455179 root 20 0 128.2g 47616 34560 S 0.0 0.1 0:00.26 reactor_0 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 455179 1 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 455179 1 idle 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=455179 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 455179 -w 256 00:40:24.653 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:24.912 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 455183 root 20 0 128.2g 47616 34560 S 0.0 0.1 0:00.00 reactor_1' 00:40:24.912 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 455183 root 20 0 128.2g 47616 34560 S 0.0 0.1 0:00.00 
reactor_1 00:40:24.912 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:24.912 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:24.912 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:24.912 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:24.912 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:24.912 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:24.912 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:24.912 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:24.912 20:41:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:40:24.912 20:41:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=455312 00:40:24.912 20:41:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:24.912 20:41:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:24.912 20:41:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:24.912 20:41:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 455179 0 00:40:24.912 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 455179 0 busy 00:40:24.912 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=455179 00:40:24.912 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:24.912 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 
00:40:24.912 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:24.912 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:24.912 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:24.912 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:24.912 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:24.912 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:24.912 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 455179 -w 256 00:40:24.912 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:25.171 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 455179 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:00.47 reactor_0' 00:40:25.171 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 455179 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:00.47 reactor_0 00:40:25.171 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:25.171 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:25.171 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:40:25.171 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:40:25.171 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:25.171 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:25.171 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:25.171 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:25.171 20:41:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:25.171 20:41:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- 
# BUSY_THRESHOLD=30 00:40:25.171 20:41:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 455179 1 00:40:25.171 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 455179 1 busy 00:40:25.171 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=455179 00:40:25.171 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:25.171 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:25.171 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:25.171 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:25.171 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:25.171 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:25.171 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:25.171 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:25.171 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 455179 -w 256 00:40:25.171 20:41:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:25.171 20:41:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 455183 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:00.26 reactor_1' 00:40:25.171 20:41:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 455183 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:00.26 reactor_1 00:40:25.171 20:41:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:25.171 20:41:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:25.171 20:41:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:40:25.171 20:41:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:40:25.171 20:41:37 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:25.171 20:41:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:25.171 20:41:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:25.171 20:41:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:25.171 20:41:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 455312 00:40:35.143 [2024-11-18 20:41:46.907586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d71d90 is same with the state(6) to be set 00:40:35.143 [2024-11-18 20:41:46.907655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d71d90 is same with the state(6) to be set 00:40:35.143 [2024-11-18 20:41:46.907679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d71d90 is same with the state(6) to be set 00:40:35.143 [2024-11-18 20:41:46.907692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d71d90 is same with the state(6) to be set 00:40:35.143 [2024-11-18 20:41:46.907704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d71d90 is same with the state(6) to be set 00:40:35.143 Initializing NVMe Controllers 00:40:35.143 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:35.143 Controller IO queue size 256, less than required. 00:40:35.143 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:35.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:35.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:35.143 Initialization complete. Launching workers. 
00:40:35.143 ======================================================== 00:40:35.143 Latency(us) 00:40:35.143 Device Information : IOPS MiB/s Average min max 00:40:35.143 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13588.60 53.08 18851.71 3758.77 22953.38 00:40:35.143 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13461.30 52.58 19029.24 4388.28 22947.16 00:40:35.143 ======================================================== 00:40:35.143 Total : 27049.89 105.66 18940.06 3758.77 22953.38 00:40:35.143 00:40:35.143 20:41:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:35.143 20:41:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 455179 0 00:40:35.143 20:41:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 455179 0 idle 00:40:35.143 20:41:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=455179 00:40:35.143 20:41:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:35.143 20:41:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:35.143 20:41:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:35.143 20:41:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:35.143 20:41:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:35.143 20:41:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:35.143 20:41:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:35.143 20:41:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:35.143 20:41:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:35.143 20:41:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 455179 -w 256 00:40:35.143 20:41:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep 
reactor_0 00:40:35.143 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 455179 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:20.21 reactor_0' 00:40:35.143 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 455179 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:20.21 reactor_0 00:40:35.143 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:35.143 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:35.143 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:35.143 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:35.143 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:35.143 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:35.143 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:35.143 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:35.143 20:41:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:35.143 20:41:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 455179 1 00:40:35.143 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 455179 1 idle 00:40:35.143 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=455179 00:40:35.143 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:35.143 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:35.143 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:35.143 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:35.143 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:35.143 20:41:47 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:35.143 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:35.143 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:35.143 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:35.143 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 455179 -w 256 00:40:35.143 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:35.403 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 455183 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.97 reactor_1' 00:40:35.403 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 455183 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.97 reactor_1 00:40:35.403 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:35.403 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:35.403 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:35.403 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:35.403 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:35.403 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:35.403 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:35.403 20:41:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:35.403 20:41:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:35.661 20:41:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:40:35.661 20:41:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:40:35.661 20:41:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:35.661 20:41:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:40:35.661 20:41:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:40:37.564 20:41:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:37.564 20:41:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:37.564 20:41:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:37.564 20:41:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:40:37.564 20:41:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:37.564 20:41:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:40:37.564 20:41:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:40:37.564 20:41:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 455179 0 00:40:37.564 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 455179 0 idle 00:40:37.564 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=455179 00:40:37.564 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:37.564 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:37.564 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:37.564 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:37.564 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:37.564 20:41:49 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:37.564 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:37.564 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:37.564 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:37.564 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 455179 -w 256 00:40:37.564 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:37.823 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 455179 root 20 0 128.2g 61056 34944 S 6.2 0.1 0:20.31 reactor_0' 00:40:37.823 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 455179 root 20 0 128.2g 61056 34944 S 6.2 0.1 0:20.31 reactor_0 00:40:37.823 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:37.823 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:37.823 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:40:37.823 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:40:37.823 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:37.823 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:37.823 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:37.823 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:37.823 20:41:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:40:37.823 20:41:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 455179 1 00:40:37.823 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 455179 1 idle 00:40:37.823 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=455179 00:40:37.823 
20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:37.823 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:37.823 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:37.823 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:37.823 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:37.823 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:37.823 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:37.823 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:37.823 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:37.823 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 455179 -w 256 00:40:37.823 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:38.082 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 455183 root 20 0 128.2g 61056 34944 S 0.0 0.1 0:10.00 reactor_1' 00:40:38.082 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 455183 root 20 0 128.2g 61056 34944 S 0.0 0.1 0:10.00 reactor_1 00:40:38.082 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:38.082 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:38.082 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:38.082 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:38.082 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:38.082 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:38.082 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:40:38.082 20:41:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:38.082 20:41:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:38.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:38.082 20:41:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:38.082 20:41:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:40:38.082 20:41:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:38.082 20:41:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:38.082 20:41:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:38.082 20:41:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:38.082 20:41:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:40:38.082 20:41:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:40:38.082 20:41:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:40:38.082 20:41:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:38.082 20:41:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:40:38.082 20:41:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:38.082 20:41:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:40:38.082 20:41:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:38.082 20:41:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:38.082 rmmod nvme_tcp 00:40:38.082 rmmod nvme_fabrics 00:40:38.082 rmmod nvme_keyring 00:40:38.082 20:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:38.082 20:41:50 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:40:38.082 20:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:40:38.082 20:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 455179 ']' 00:40:38.082 20:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 455179 00:40:38.082 20:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 455179 ']' 00:40:38.082 20:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 455179 00:40:38.082 20:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:40:38.082 20:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:38.082 20:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 455179 00:40:38.341 20:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:38.341 20:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:38.341 20:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 455179' 00:40:38.341 killing process with pid 455179 00:40:38.341 20:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 455179 00:40:38.341 20:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 455179 00:40:38.341 20:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:38.341 20:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:38.341 20:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:38.341 20:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:40:38.341 20:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:40:38.341 20:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:38.341 20:41:50 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:40:38.341 20:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:38.341 20:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:38.341 20:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:38.341 20:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:38.341 20:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:40.881 20:41:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:40.881 00:40:40.881 real 0m18.748s 00:40:40.881 user 0m36.755s 00:40:40.881 sys 0m6.777s 00:40:40.881 20:41:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:40.881 20:41:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:40.881 ************************************ 00:40:40.881 END TEST nvmf_interrupt 00:40:40.881 ************************************ 00:40:40.881 00:40:40.881 real 33m10.433s 00:40:40.881 user 87m53.358s 00:40:40.881 sys 8m6.431s 00:40:40.881 20:41:52 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:40.881 20:41:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:40.881 ************************************ 00:40:40.881 END TEST nvmf_tcp 00:40:40.881 ************************************ 00:40:40.881 20:41:52 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:40:40.881 20:41:52 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:40.881 20:41:52 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:40.881 20:41:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:40.881 20:41:52 -- common/autotest_common.sh@10 -- # set +x 00:40:40.881 ************************************ 
00:40:40.881 START TEST spdkcli_nvmf_tcp 00:40:40.881 ************************************ 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:40.881 * Looking for test storage... 00:40:40.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:40.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:40.881 --rc genhtml_branch_coverage=1 00:40:40.881 --rc genhtml_function_coverage=1 00:40:40.881 --rc genhtml_legend=1 00:40:40.881 --rc geninfo_all_blocks=1 00:40:40.881 --rc geninfo_unexecuted_blocks=1 00:40:40.881 00:40:40.881 ' 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:40.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:40.881 --rc genhtml_branch_coverage=1 00:40:40.881 --rc genhtml_function_coverage=1 00:40:40.881 --rc genhtml_legend=1 00:40:40.881 --rc geninfo_all_blocks=1 
00:40:40.881 --rc geninfo_unexecuted_blocks=1 00:40:40.881 00:40:40.881 ' 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:40.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:40.881 --rc genhtml_branch_coverage=1 00:40:40.881 --rc genhtml_function_coverage=1 00:40:40.881 --rc genhtml_legend=1 00:40:40.881 --rc geninfo_all_blocks=1 00:40:40.881 --rc geninfo_unexecuted_blocks=1 00:40:40.881 00:40:40.881 ' 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:40.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:40.881 --rc genhtml_branch_coverage=1 00:40:40.881 --rc genhtml_function_coverage=1 00:40:40.881 --rc genhtml_legend=1 00:40:40.881 --rc geninfo_all_blocks=1 00:40:40.881 --rc geninfo_unexecuted_blocks=1 00:40:40.881 00:40:40.881 ' 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:40.881 20:41:52 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:40.882 20:41:52 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:40.882 20:41:52 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:40.882 20:41:52 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:40.882 20:41:52 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:40:40.882 20:41:52 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:40.882 20:41:52 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:40:40.882 20:41:52 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:40:40.882 20:41:52 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:40.882 20:41:52 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:40.882 20:41:52 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:40.882 20:41:52 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:40.882 20:41:52 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:40.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:40.882 20:41:52 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:40.882 20:41:52 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:40.882 20:41:52 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:40.882 20:41:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:40:40.882 20:41:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:40:40.882 20:41:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:40:40.882 20:41:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:40:40.882 20:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:40.882 20:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:40.882 20:41:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:40:40.882 20:41:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=457216 00:40:40.882 20:41:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:40:40.882 20:41:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 457216 00:40:40.882 20:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 457216 ']' 00:40:40.882 20:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:40.882 20:41:52 
spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:40.882 20:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:40.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:40.882 20:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:40.882 20:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:40.882 [2024-11-18 20:41:52.689489] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:40:40.882 [2024-11-18 20:41:52.689574] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid457216 ] 00:40:40.882 [2024-11-18 20:41:52.759209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:40.882 [2024-11-18 20:41:52.809934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:40.882 [2024-11-18 20:41:52.809953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:41.140 20:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:41.140 20:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:40:41.140 20:41:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:40:41.140 20:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:41.140 20:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:41.140 20:41:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:40:41.140 20:41:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:40:41.140 20:41:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:40:41.140 
20:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:41.140 20:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:41.140 20:41:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:40:41.140 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:40:41.140 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:40:41.140 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:40:41.140 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:40:41.140 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:40:41.140 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:40:41.140 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:41.140 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:40:41.140 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:40:41.140 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:41.140 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:41.140 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:40:41.140 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:41.140 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:41.140 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:40:41.140 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:41.140 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:41.140 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:41.140 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:41.140 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:40:41.140 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:40:41.140 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:41.140 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:40:41.140 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:41.140 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:40:41.140 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:40:41.140 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:40:41.140 ' 00:40:43.689 [2024-11-18 20:41:55.630825] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:45.062 [2024-11-18 20:41:56.899192] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:40:47.603 [2024-11-18 20:41:59.242244] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4261 *** 00:40:49.505 [2024-11-18 20:42:01.268712] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:40:50.879 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:40:50.879 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:40:50.879 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:40:50.879 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:40:50.879 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:40:50.879 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:40:50.879 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:40:50.879 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:50.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:40:50.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:40:50.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:50.879 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:50.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:40:50.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:50.879 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:40:50.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:40:50.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:50.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:50.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:50.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:50.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:40:50.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:40:50.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:50.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:40:50.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:50.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:40:50.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:40:50.879 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:40:51.137 20:42:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:40:51.137 20:42:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:51.137 
20:42:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:51.137 20:42:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:40:51.137 20:42:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:51.137 20:42:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:51.137 20:42:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:40:51.137 20:42:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:40:51.395 20:42:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:40:51.653 20:42:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:40:51.653 20:42:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:40:51.653 20:42:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:51.653 20:42:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:51.653 20:42:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:40:51.653 20:42:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:51.653 20:42:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:51.653 20:42:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:40:51.653 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:40:51.653 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:51.653 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:40:51.653 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:40:51.653 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:40:51.653 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:40:51.653 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:51.653 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:40:51.653 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:40:51.653 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:40:51.653 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:40:51.653 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:40:51.653 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:40:51.653 ' 00:40:56.923 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:40:56.923 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:40:56.923 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:56.923 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:40:56.923 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:40:56.923 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:40:56.923 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:40:56.923 Executing command: ['/nvmf/subsystem 
delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:56.923 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:40:56.923 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:40:56.923 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:40:56.923 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:40:56.923 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:40:56.923 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:40:56.923 20:42:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:40:56.923 20:42:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:56.923 20:42:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:56.923 20:42:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 457216 00:40:56.923 20:42:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 457216 ']' 00:40:56.923 20:42:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 457216 00:40:56.924 20:42:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:40:56.924 20:42:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:56.924 20:42:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 457216 00:40:56.924 20:42:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:56.924 20:42:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:56.924 20:42:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 457216' 00:40:56.924 killing process with pid 457216 00:40:56.924 20:42:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 457216 00:40:56.924 20:42:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 457216 00:40:57.182 20:42:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # 
cleanup 00:40:57.182 20:42:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:40:57.182 20:42:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 457216 ']' 00:40:57.182 20:42:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 457216 00:40:57.182 20:42:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 457216 ']' 00:40:57.182 20:42:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 457216 00:40:57.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (457216) - No such process 00:40:57.182 20:42:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 457216 is not found' 00:40:57.182 Process with pid 457216 is not found 00:40:57.182 20:42:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:40:57.182 20:42:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:40:57.182 20:42:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:40:57.182 00:40:57.182 real 0m16.643s 00:40:57.182 user 0m35.425s 00:40:57.182 sys 0m0.833s 00:40:57.182 20:42:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:57.182 20:42:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:57.182 ************************************ 00:40:57.182 END TEST spdkcli_nvmf_tcp 00:40:57.182 ************************************ 00:40:57.182 20:42:09 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:57.182 20:42:09 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:57.182 20:42:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:57.182 20:42:09 -- common/autotest_common.sh@10 
-- # set +x 00:40:57.182 ************************************ 00:40:57.182 START TEST nvmf_identify_passthru 00:40:57.182 ************************************ 00:40:57.182 20:42:09 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:57.182 * Looking for test storage... 00:40:57.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:57.440 20:42:09 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:57.440 20:42:09 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:40:57.440 20:42:09 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:57.440 20:42:09 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:40:57.440 20:42:09 
nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:40:57.440 20:42:09 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:57.440 20:42:09 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:57.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:57.440 --rc genhtml_branch_coverage=1 00:40:57.440 --rc genhtml_function_coverage=1 00:40:57.440 --rc genhtml_legend=1 00:40:57.440 --rc geninfo_all_blocks=1 00:40:57.440 --rc geninfo_unexecuted_blocks=1 00:40:57.440 00:40:57.440 ' 00:40:57.440 
20:42:09 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:57.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:57.440 --rc genhtml_branch_coverage=1 00:40:57.440 --rc genhtml_function_coverage=1 00:40:57.440 --rc genhtml_legend=1 00:40:57.440 --rc geninfo_all_blocks=1 00:40:57.440 --rc geninfo_unexecuted_blocks=1 00:40:57.440 00:40:57.440 ' 00:40:57.440 20:42:09 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:57.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:57.440 --rc genhtml_branch_coverage=1 00:40:57.440 --rc genhtml_function_coverage=1 00:40:57.440 --rc genhtml_legend=1 00:40:57.440 --rc geninfo_all_blocks=1 00:40:57.440 --rc geninfo_unexecuted_blocks=1 00:40:57.440 00:40:57.440 ' 00:40:57.440 20:42:09 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:57.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:57.440 --rc genhtml_branch_coverage=1 00:40:57.440 --rc genhtml_function_coverage=1 00:40:57.440 --rc genhtml_legend=1 00:40:57.440 --rc geninfo_all_blocks=1 00:40:57.440 --rc geninfo_unexecuted_blocks=1 00:40:57.440 00:40:57.440 ' 00:40:57.440 20:42:09 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:57.440 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:40:57.440 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:57.440 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:57.440 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:57.440 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:57.440 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:57.440 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:40:57.440 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:57.440 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:57.440 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:57.440 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:57.440 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:57.440 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:57.440 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:57.440 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:57.440 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:57.440 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:57.440 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:57.440 20:42:09 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:57.440 20:42:09 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:57.440 20:42:09 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:57.440 20:42:09 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:57.440 20:42:09 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:57.440 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:40:57.440 20:42:09 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:57.440 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:57.440 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:57.440 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:57.440 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:57.440 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:57.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:57.440 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:57.440 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:57.440 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:57.440 20:42:09 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:57.440 20:42:09 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:57.440 20:42:09 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:57.441 20:42:09 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:57.441 20:42:09 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:57.441 20:42:09 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:57.441 20:42:09 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:57.441 20:42:09 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:40:57.441 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:57.441 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:57.441 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:57.441 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:57.441 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:57.441 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:57.441 20:42:09 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:57.441 20:42:09 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:57.441 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:57.441 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:57.441 20:42:09 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:40:57.441 20:42:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:59.342 
20:42:11 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:59.342 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:59.342 Found 0000:0a:00.1 
(0x8086 - 0x159b) 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:59.342 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:59.343 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:59.343 20:42:11 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:59.343 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:59.343 
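The discovery loop traced above resolves each E810 PCI address to its kernel net device by globbing `/sys/bus/pci/devices/$pci/net/` and stripping the path prefix, producing the "Found net devices under ..." lines. A minimal standalone sketch of that lookup, using a temporary directory in place of the real sysfs tree so it runs unprivileged (the fake tree and its contents are illustrative):

```shell
# Sketch of the pci -> net-device lookup from nvmf/common.sh:
#   pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
#   pci_net_devs=("${pci_net_devs[@]##*/}")
# A fake sysfs tree stands in for the real one so this runs without root.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:0a:00.0/net/cvl_0_0" "$sysfs/0000:0a:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:0a:00.0 0000:0a:00.1; do
    pci_net_devs=("$sysfs/$pci/net/"*)        # glob: one entry per netdev dir
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$sysfs"
```

With the fake tree above this prints the same two "Found net devices under ..." lines that appear in the trace.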
20:42:11 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:59.343 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:59.601 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:59.601 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:59.601 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:59.601 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:59.601 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:59.601 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:59.601 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:59.601 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:59.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:59.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:40:59.601 00:40:59.601 --- 10.0.0.2 ping statistics --- 00:40:59.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:59.601 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:40:59.601 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:59.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:59.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:40:59.601 00:40:59.601 --- 10.0.0.1 ping statistics --- 00:40:59.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:59.601 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:40:59.601 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:59.601 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:40:59.601 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:59.601 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:59.601 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:59.601 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:59.601 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:59.601 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:59.601 20:42:11 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:59.601 20:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:40:59.601 20:42:11 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:59.601 20:42:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:59.601 20:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:40:59.601 
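The `nvmf_tcp_init` sequence above splits the two discovered interfaces into a target side (moved into the `cvl_0_0_ns_spdk` namespace and given 10.0.0.2) and an initiator side left in the root namespace (10.0.0.1), then verifies both directions with `ping`. The selection step is plain bash and can be sketched as follows; the actual `ip`/`iptables` commands need root, so they are shown only as comments:

```shell
# Sketch of the interface-selection step from nvmf_tcp_init. With two
# hardware ports, the first becomes the target and the second the
# initiator, matching the trace above.
net_devs=(cvl_0_0 cvl_0_1)
TCP_INTERFACE_LIST=("${net_devs[@]}")
if (( ${#TCP_INTERFACE_LIST[@]} > 1 )); then
    NVMF_TARGET_INTERFACE=${TCP_INTERFACE_LIST[0]}
    NVMF_INITIATOR_INTERFACE=${TCP_INTERFACE_LIST[1]}
fi
NVMF_TARGET_NAMESPACE=${NVMF_TARGET_INTERFACE}_ns_spdk
# The real script then runs (as root), as seen in the log:
#   ip netns add "$NVMF_TARGET_NAMESPACE"
#   ip link set "$NVMF_TARGET_INTERFACE" netns "$NVMF_TARGET_NAMESPACE"
#   ip addr add 10.0.0.1/24 dev "$NVMF_INITIATOR_INTERFACE"
#   ip netns exec "$NVMF_TARGET_NAMESPACE" \
#       ip addr add 10.0.0.2/24 dev "$NVMF_TARGET_INTERFACE"
echo "target=$NVMF_TARGET_INTERFACE initiator=$NVMF_INITIATOR_INTERFACE ns=$NVMF_TARGET_NAMESPACE"
```

Putting the target interface in its own network namespace is what forces the initiator's traffic through a real TCP path between the two ports instead of the kernel loopback.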
20:42:11 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:40:59.601 20:42:11 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:40:59.601 20:42:11 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:40:59.601 20:42:11 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:40:59.601 20:42:11 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:40:59.601 20:42:11 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:40:59.601 20:42:11 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:59.601 20:42:11 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:40:59.601 20:42:11 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:40:59.862 20:42:11 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:40:59.862 20:42:11 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:40:59.862 20:42:11 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:40:59.862 20:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:40:59.862 20:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:40:59.862 20:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:59.862 20:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:40:59.862 20:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:41:04.052 20:42:15 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:41:04.052 20:42:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:41:04.052 20:42:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:41:04.052 20:42:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:41:08.249 20:42:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:41:08.249 20:42:20 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:41:08.249 20:42:20 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:08.249 20:42:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:08.249 20:42:20 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:41:08.249 20:42:20 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:08.249 20:42:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:08.249 20:42:20 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=462486 00:41:08.249 20:42:20 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:41:08.249 20:42:20 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:08.249 20:42:20 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 462486 00:41:08.249 20:42:20 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 462486 ']' 00:41:08.249 20:42:20 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
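Both identify passes above scrape a single field out of `spdk_nvme_identify`'s text output with `grep` + `awk '{print $3}'`. A self-contained sketch against a canned identify excerpt (the serial matches this run; the multi-word model string is an assumed example to show the `$3` behavior):

```shell
# Parse "Serial Number:" / "Model Number:" lines the way the test does.
# Note awk '{print $3}' keeps only the third whitespace-separated token,
# which is why a multi-word model string is reduced to its first word
# ("INTEL" in the trace above).
identify_output='Serial Number:                       PHLJ916004901P0FGN
Model Number:                        INTEL SSDPE2KX010T8'

nvme_serial_number=$(printf '%s\n' "$identify_output" \
    | grep 'Serial Number:' | awk '{print $3}')
nvme_model_number=$(printf '%s\n' "$identify_output" \
    | grep 'Model Number:' | awk '{print $3}')
echo "serial=$nvme_serial_number model=$nvme_model_number"
```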
00:41:08.249 20:42:20 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:08.249 20:42:20 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:08.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:08.249 20:42:20 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:08.249 20:42:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:08.249 [2024-11-18 20:42:20.161786] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:41:08.249 [2024-11-18 20:42:20.161882] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:08.249 [2024-11-18 20:42:20.237116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:08.508 [2024-11-18 20:42:20.289072] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:08.508 [2024-11-18 20:42:20.289127] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:08.508 [2024-11-18 20:42:20.289155] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:08.508 [2024-11-18 20:42:20.289166] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:08.508 [2024-11-18 20:42:20.289175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:41:08.508 [2024-11-18 20:42:20.292660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:08.508 [2024-11-18 20:42:20.292730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:08.508 [2024-11-18 20:42:20.292755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:08.508 [2024-11-18 20:42:20.292758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:08.508 20:42:20 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:08.508 20:42:20 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:41:08.508 20:42:20 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:41:08.508 20:42:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.508 20:42:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:08.508 INFO: Log level set to 20 00:41:08.508 INFO: Requests: 00:41:08.508 { 00:41:08.508 "jsonrpc": "2.0", 00:41:08.508 "method": "nvmf_set_config", 00:41:08.508 "id": 1, 00:41:08.508 "params": { 00:41:08.508 "admin_cmd_passthru": { 00:41:08.508 "identify_ctrlr": true 00:41:08.508 } 00:41:08.508 } 00:41:08.508 } 00:41:08.508 00:41:08.508 INFO: response: 00:41:08.508 { 00:41:08.508 "jsonrpc": "2.0", 00:41:08.508 "id": 1, 00:41:08.508 "result": true 00:41:08.509 } 00:41:08.509 00:41:08.509 20:42:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.509 20:42:20 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:41:08.509 20:42:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.509 20:42:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:08.509 INFO: Setting log level to 20 00:41:08.509 INFO: Setting log level to 20 00:41:08.509 INFO: Log level set to 20 00:41:08.509 INFO: Log level set to 20 00:41:08.509 
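The `rpc_cmd` calls above post JSON-RPC requests to the target's `/var/tmp/spdk.sock`, and the `INFO: Requests:` / `INFO: response:` dumps show the exact payloads. The `nvmf_set_config` request that enables identify passthru can be reproduced as plain text (constructed and printed only; in the real run `scripts/rpc.py` sends it over the UNIX socket):

```shell
# Build the nvmf_set_config JSON-RPC payload shown in the INFO: dump
# above. This sketch only constructs and prints the request; it does not
# talk to a running target.
request='{
  "jsonrpc": "2.0",
  "method": "nvmf_set_config",
  "id": 1,
  "params": {
    "admin_cmd_passthru": {
      "identify_ctrlr": true
    }
  }
}'
echo "$request"
```

Setting `identify_ctrlr: true` is what makes the target forward Identify Controller admin commands to the underlying NVMe device instead of answering with SPDK's own controller data.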
INFO: Requests: 00:41:08.509 { 00:41:08.509 "jsonrpc": "2.0", 00:41:08.509 "method": "framework_start_init", 00:41:08.509 "id": 1 00:41:08.509 } 00:41:08.509 00:41:08.509 INFO: Requests: 00:41:08.509 { 00:41:08.509 "jsonrpc": "2.0", 00:41:08.509 "method": "framework_start_init", 00:41:08.509 "id": 1 00:41:08.509 } 00:41:08.509 00:41:08.509 [2024-11-18 20:42:20.514932] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:41:08.768 INFO: response: 00:41:08.768 { 00:41:08.768 "jsonrpc": "2.0", 00:41:08.768 "id": 1, 00:41:08.768 "result": true 00:41:08.768 } 00:41:08.768 00:41:08.768 INFO: response: 00:41:08.768 { 00:41:08.768 "jsonrpc": "2.0", 00:41:08.768 "id": 1, 00:41:08.768 "result": true 00:41:08.768 } 00:41:08.768 00:41:08.769 20:42:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.769 20:42:20 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:08.769 20:42:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.769 20:42:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:08.769 INFO: Setting log level to 40 00:41:08.769 INFO: Setting log level to 40 00:41:08.769 INFO: Setting log level to 40 00:41:08.769 [2024-11-18 20:42:20.525229] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:08.769 20:42:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.769 20:42:20 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:41:08.769 20:42:20 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:08.769 20:42:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:08.769 20:42:20 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:41:08.769 20:42:20 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.769 20:42:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:12.138 Nvme0n1 00:41:12.138 20:42:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.138 20:42:23 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:41:12.138 20:42:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:12.138 20:42:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:12.138 20:42:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.138 20:42:23 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:41:12.138 20:42:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:12.138 20:42:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:12.138 20:42:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.138 20:42:23 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:12.138 20:42:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:12.138 20:42:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:12.138 [2024-11-18 20:42:23.432075] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:12.138 20:42:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.138 20:42:23 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:41:12.138 20:42:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:12.138 20:42:23 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:12.138 [ 00:41:12.138 { 00:41:12.138 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:41:12.138 "subtype": "Discovery", 00:41:12.138 "listen_addresses": [], 00:41:12.138 "allow_any_host": true, 00:41:12.138 "hosts": [] 00:41:12.138 }, 00:41:12.138 { 00:41:12.138 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:41:12.138 "subtype": "NVMe", 00:41:12.138 "listen_addresses": [ 00:41:12.138 { 00:41:12.138 "trtype": "TCP", 00:41:12.138 "adrfam": "IPv4", 00:41:12.138 "traddr": "10.0.0.2", 00:41:12.138 "trsvcid": "4420" 00:41:12.138 } 00:41:12.138 ], 00:41:12.138 "allow_any_host": true, 00:41:12.138 "hosts": [], 00:41:12.139 "serial_number": "SPDK00000000000001", 00:41:12.139 "model_number": "SPDK bdev Controller", 00:41:12.139 "max_namespaces": 1, 00:41:12.139 "min_cntlid": 1, 00:41:12.139 "max_cntlid": 65519, 00:41:12.139 "namespaces": [ 00:41:12.139 { 00:41:12.139 "nsid": 1, 00:41:12.139 "bdev_name": "Nvme0n1", 00:41:12.139 "name": "Nvme0n1", 00:41:12.139 "nguid": "C8FA54A5FEFD479AAD63C2C64610EF33", 00:41:12.139 "uuid": "c8fa54a5-fefd-479a-ad63-c2c64610ef33" 00:41:12.139 } 00:41:12.139 ] 00:41:12.139 } 00:41:12.139 ] 00:41:12.139 20:42:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.139 20:42:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:12.139 20:42:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:41:12.139 20:42:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:41:12.139 20:42:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:41:12.139 20:42:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:12.139 20:42:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:41:12.139 20:42:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:41:12.139 20:42:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:41:12.139 20:42:23 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:41:12.139 20:42:23 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:41:12.139 20:42:23 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:12.139 20:42:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:12.139 20:42:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:12.139 20:42:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.139 20:42:23 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:41:12.139 20:42:23 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:41:12.139 20:42:23 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:12.139 20:42:23 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:41:12.139 20:42:23 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:12.139 20:42:23 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:41:12.139 20:42:23 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:12.139 20:42:23 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:12.139 rmmod nvme_tcp 00:41:12.139 rmmod nvme_fabrics 00:41:12.139 rmmod nvme_keyring 00:41:12.139 20:42:23 
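The pass/fail core of the test is the pair of `'[' ... '!=' ... ']'` comparisons above: the serial and model numbers read over NVMe/TCP through the passthru subsystem must match what the same `spdk_nvme_identify` reported directly over PCIe. Reduced to a sketch, with the values taken from this run:

```shell
# Passthru check: values read locally over PCIe vs. remotely over TCP
# must be identical, otherwise the identify passthru path is broken.
nvme_serial_number=PHLJ916004901P0FGN    # from the PCIe identify
nvmf_serial_number=PHLJ916004901P0FGN    # from the TCP identify
nvme_model_number=INTEL
nvmf_model_number=INTEL

if [ "$nvmf_serial_number" != "$nvme_serial_number" ]; then
    echo "identify passthru failed: serial mismatch"
    exit 1
fi
if [ "$nvmf_model_number" != "$nvme_model_number" ]; then
    echo "identify passthru failed: model mismatch"
    exit 1
fi
echo "identify passthru OK"
```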
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:12.139 20:42:23 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:41:12.139 20:42:23 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:41:12.139 20:42:23 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 462486 ']' 00:41:12.139 20:42:23 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 462486 00:41:12.139 20:42:23 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 462486 ']' 00:41:12.139 20:42:23 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 462486 00:41:12.139 20:42:23 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:41:12.139 20:42:23 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:12.139 20:42:23 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 462486 00:41:12.139 20:42:23 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:12.139 20:42:23 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:12.139 20:42:23 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 462486' 00:41:12.139 killing process with pid 462486 00:41:12.139 20:42:23 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 462486 00:41:12.139 20:42:23 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 462486 00:41:13.514 20:42:25 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:13.515 20:42:25 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:13.515 20:42:25 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:13.515 20:42:25 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:41:13.515 20:42:25 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:41:13.515 20:42:25 nvmf_identify_passthru -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:41:13.515 20:42:25 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:41:13.515 20:42:25 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:13.515 20:42:25 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:13.515 20:42:25 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:13.515 20:42:25 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:13.515 20:42:25 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:16.051 20:42:27 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:16.051 00:41:16.051 real 0m18.326s 00:41:16.051 user 0m27.142s 00:41:16.051 sys 0m2.404s 00:41:16.051 20:42:27 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:16.051 20:42:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:16.051 ************************************ 00:41:16.052 END TEST nvmf_identify_passthru 00:41:16.052 ************************************ 00:41:16.052 20:42:27 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:16.052 20:42:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:16.052 20:42:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:16.052 20:42:27 -- common/autotest_common.sh@10 -- # set +x 00:41:16.052 ************************************ 00:41:16.052 START TEST nvmf_dif 00:41:16.052 ************************************ 00:41:16.052 20:42:27 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:16.052 * Looking for test storage... 
00:41:16.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:16.052 20:42:27 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:16.052 20:42:27 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:41:16.052 20:42:27 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:16.052 20:42:27 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:41:16.052 20:42:27 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:16.052 20:42:27 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:16.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:16.052 --rc genhtml_branch_coverage=1 00:41:16.052 --rc genhtml_function_coverage=1 00:41:16.052 --rc genhtml_legend=1 00:41:16.052 --rc geninfo_all_blocks=1 00:41:16.052 --rc geninfo_unexecuted_blocks=1 00:41:16.052 00:41:16.052 ' 00:41:16.052 20:42:27 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:16.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:16.052 --rc genhtml_branch_coverage=1 00:41:16.052 --rc genhtml_function_coverage=1 00:41:16.052 --rc genhtml_legend=1 00:41:16.052 --rc geninfo_all_blocks=1 00:41:16.052 --rc geninfo_unexecuted_blocks=1 00:41:16.052 00:41:16.052 ' 00:41:16.052 20:42:27 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:41:16.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:16.052 --rc genhtml_branch_coverage=1 00:41:16.052 --rc genhtml_function_coverage=1 00:41:16.052 --rc genhtml_legend=1 00:41:16.052 --rc geninfo_all_blocks=1 00:41:16.052 --rc geninfo_unexecuted_blocks=1 00:41:16.052 00:41:16.052 ' 00:41:16.052 20:42:27 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:16.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:16.052 --rc genhtml_branch_coverage=1 00:41:16.052 --rc genhtml_function_coverage=1 00:41:16.052 --rc genhtml_legend=1 00:41:16.052 --rc geninfo_all_blocks=1 00:41:16.052 --rc geninfo_unexecuted_blocks=1 00:41:16.052 00:41:16.052 ' 00:41:16.052 20:42:27 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:16.052 20:42:27 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:41:16.052 20:42:27 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:16.052 20:42:27 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:16.052 20:42:27 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:16.052 20:42:27 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:16.052 20:42:27 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:16.052 20:42:27 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:16.052 20:42:27 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:16.052 20:42:27 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:16.052 20:42:27 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:16.052 20:42:27 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:16.052 20:42:27 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:16.052 20:42:27 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:16.052 20:42:27 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:16.052 20:42:27 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:16.052 20:42:27 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:16.052 20:42:27 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:16.052 20:42:27 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:16.052 20:42:27 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:16.052 20:42:27 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:16.052 20:42:27 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:16.052 20:42:27 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:16.052 20:42:27 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:41:16.052 20:42:27 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:16.052 20:42:27 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:41:16.052 20:42:27 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:16.052 20:42:27 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:16.052 20:42:27 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:16.052 20:42:27 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:16.052 20:42:27 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:16.052 20:42:27 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:16.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:16.052 20:42:27 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:16.052 20:42:27 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:16.052 20:42:27 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:16.052 20:42:27 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:41:16.053 20:42:27 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:41:16.053 20:42:27 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:41:16.053 20:42:27 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:41:16.053 20:42:27 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:41:16.053 20:42:27 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:16.053 20:42:27 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:16.053 20:42:27 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:16.053 20:42:27 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:16.053 20:42:27 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:16.053 20:42:27 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:16.053 20:42:27 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:16.053 20:42:27 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:16.053 20:42:27 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:16.053 20:42:27 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:16.053 20:42:27 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:41:16.053 20:42:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:41:17.958 20:42:29 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:17.958 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:17.958 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:17.958 20:42:29 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:17.958 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:17.958 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:17.958 
20:42:29 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:17.958 20:42:29 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:17.959 20:42:29 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:18.218 20:42:29 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:18.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:18.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:41:18.218 00:41:18.218 --- 10.0.0.2 ping statistics --- 00:41:18.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:18.218 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:41:18.218 20:42:29 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:18.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:18.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:41:18.218 00:41:18.218 --- 10.0.0.1 ping statistics --- 00:41:18.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:18.218 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:41:18.218 20:42:29 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:18.218 20:42:29 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:41:18.218 20:42:29 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:41:18.218 20:42:29 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:19.155 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:41:19.155 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:41:19.155 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:41:19.155 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:41:19.155 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:41:19.155 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:41:19.155 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:41:19.155 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:41:19.155 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:41:19.155 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:41:19.155 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:41:19.155 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:41:19.155 0000:80:04.4 (8086 0e24): Already 
using the vfio-pci driver 00:41:19.155 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:41:19.155 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:41:19.155 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:41:19.155 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:41:19.413 20:42:31 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:19.413 20:42:31 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:19.413 20:42:31 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:19.413 20:42:31 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:19.413 20:42:31 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:19.413 20:42:31 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:19.413 20:42:31 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:41:19.413 20:42:31 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:41:19.413 20:42:31 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:19.413 20:42:31 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:19.413 20:42:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:19.413 20:42:31 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=465745 00:41:19.413 20:42:31 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:41:19.413 20:42:31 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 465745 00:41:19.413 20:42:31 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 465745 ']' 00:41:19.413 20:42:31 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:19.413 20:42:31 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:19.413 20:42:31 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:41:19.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:19.413 20:42:31 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:19.413 20:42:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:19.414 [2024-11-18 20:42:31.401364] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:41:19.414 [2024-11-18 20:42:31.401440] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:19.673 [2024-11-18 20:42:31.473721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:19.673 [2024-11-18 20:42:31.518615] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:19.673 [2024-11-18 20:42:31.518688] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:19.673 [2024-11-18 20:42:31.518702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:19.673 [2024-11-18 20:42:31.518713] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:19.673 [2024-11-18 20:42:31.518723] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:41:19.673 [2024-11-18 20:42:31.519301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:19.933 20:42:31 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:19.933 20:42:31 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:41:19.933 20:42:31 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:19.933 20:42:31 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:19.933 20:42:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:19.933 20:42:31 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:19.933 20:42:31 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:41:19.933 20:42:31 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:41:19.933 20:42:31 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.933 20:42:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:19.933 [2024-11-18 20:42:31.716490] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:19.933 20:42:31 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.933 20:42:31 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:41:19.933 20:42:31 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:19.933 20:42:31 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:19.933 20:42:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:19.933 ************************************ 00:41:19.933 START TEST fio_dif_1_default 00:41:19.933 ************************************ 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:19.933 bdev_null0 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:19.933 [2024-11-18 20:42:31.776862] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:19.933 { 00:41:19.933 "params": { 00:41:19.933 "name": "Nvme$subsystem", 00:41:19.933 "trtype": "$TEST_TRANSPORT", 00:41:19.933 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:19.933 "adrfam": "ipv4", 00:41:19.933 "trsvcid": "$NVMF_PORT", 00:41:19.933 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:19.933 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:19.933 "hdgst": ${hdgst:-false}, 00:41:19.933 "ddgst": ${ddgst:-false} 00:41:19.933 }, 00:41:19.933 "method": "bdev_nvme_attach_controller" 00:41:19.933 } 00:41:19.933 EOF 00:41:19.933 )") 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:41:19.933 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:19.934 20:42:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:41:19.934 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:19.934 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:41:19.934 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:19.934 20:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:41:19.934 20:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:41:19.934 20:42:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:41:19.934 20:42:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:41:19.934 20:42:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:19.934 "params": { 00:41:19.934 "name": "Nvme0", 00:41:19.934 "trtype": "tcp", 00:41:19.934 "traddr": "10.0.0.2", 00:41:19.934 "adrfam": "ipv4", 00:41:19.934 "trsvcid": "4420", 00:41:19.934 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:19.934 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:19.934 "hdgst": false, 00:41:19.934 "ddgst": false 00:41:19.934 }, 00:41:19.934 "method": "bdev_nvme_attach_controller" 00:41:19.934 }' 00:41:19.934 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:19.934 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:19.934 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:19.934 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:19.934 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:19.934 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:19.934 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:19.934 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:19.934 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:19.934 20:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:20.193 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:20.193 fio-3.35 
00:41:20.193 Starting 1 thread 00:41:32.401 00:41:32.401 filename0: (groupid=0, jobs=1): err= 0: pid=465973: Mon Nov 18 20:42:42 2024 00:41:32.401 read: IOPS=158, BW=634KiB/s (649kB/s)(6352KiB/10024msec) 00:41:32.401 slat (nsec): min=4061, max=40101, avg=9355.94, stdev=2864.58 00:41:32.401 clat (usec): min=545, max=47381, avg=25218.63, stdev=19789.78 00:41:32.401 lat (usec): min=553, max=47393, avg=25227.99, stdev=19789.78 00:41:32.401 clat percentiles (usec): 00:41:32.401 | 1.00th=[ 578], 5.00th=[ 594], 10.00th=[ 603], 20.00th=[ 619], 00:41:32.401 | 30.00th=[ 668], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:41:32.401 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:32.401 | 99.00th=[41157], 99.50th=[41681], 99.90th=[47449], 99.95th=[47449], 00:41:32.401 | 99.99th=[47449] 00:41:32.401 bw ( KiB/s): min= 384, max= 1024, per=99.89%, avg=633.60, stdev=243.39, samples=20 00:41:32.401 iops : min= 96, max= 256, avg=158.40, stdev=60.85, samples=20 00:41:32.401 lat (usec) : 750=38.98%, 1000=0.31% 00:41:32.401 lat (msec) : 50=60.71% 00:41:32.401 cpu : usr=91.56%, sys=8.12%, ctx=30, majf=0, minf=232 00:41:32.401 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:32.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:32.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:32.401 issued rwts: total=1588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:32.401 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:32.401 00:41:32.401 Run status group 0 (all jobs): 00:41:32.401 READ: bw=634KiB/s (649kB/s), 634KiB/s-634KiB/s (649kB/s-649kB/s), io=6352KiB (6504kB), run=10024-10024msec 00:41:32.401 20:42:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:41:32.401 20:42:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:41:32.401 20:42:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
00:41:32.401 20:42:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:32.401 20:42:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:41:32.401 20:42:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:32.401 20:42:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.401 20:42:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:32.401 20:42:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.402 00:41:32.402 real 0m11.016s 00:41:32.402 user 0m10.079s 00:41:32.402 sys 0m1.095s 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:32.402 ************************************ 00:41:32.402 END TEST fio_dif_1_default 00:41:32.402 ************************************ 00:41:32.402 20:42:42 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:41:32.402 20:42:42 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:32.402 20:42:42 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:32.402 20:42:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:32.402 ************************************ 00:41:32.402 START TEST fio_dif_1_multi_subsystems 00:41:32.402 ************************************ 00:41:32.402 20:42:42 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:32.402 bdev_null0 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.402 20:42:42 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:32.402 [2024-11-18 20:42:42.832736] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:32.402 bdev_null1 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:41:32.402 { 00:41:32.402 "params": { 00:41:32.402 "name": "Nvme$subsystem", 00:41:32.402 "trtype": "$TEST_TRANSPORT", 00:41:32.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:32.402 "adrfam": "ipv4", 00:41:32.402 "trsvcid": "$NVMF_PORT", 00:41:32.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:32.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:32.402 "hdgst": ${hdgst:-false}, 00:41:32.402 "ddgst": ${ddgst:-false} 00:41:32.402 }, 00:41:32.402 "method": "bdev_nvme_attach_controller" 00:41:32.402 } 00:41:32.402 EOF 00:41:32.402 )") 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:32.402 { 00:41:32.402 "params": { 00:41:32.402 "name": "Nvme$subsystem", 00:41:32.402 "trtype": "$TEST_TRANSPORT", 00:41:32.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:32.402 "adrfam": "ipv4", 00:41:32.402 "trsvcid": "$NVMF_PORT", 00:41:32.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:32.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:32.402 "hdgst": ${hdgst:-false}, 00:41:32.402 "ddgst": ${ddgst:-false} 00:41:32.402 }, 00:41:32.402 "method": "bdev_nvme_attach_controller" 00:41:32.402 } 00:41:32.402 EOF 00:41:32.402 )") 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:41:32.402 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:32.402 "params": { 00:41:32.402 "name": "Nvme0", 00:41:32.402 "trtype": "tcp", 00:41:32.402 "traddr": "10.0.0.2", 00:41:32.402 "adrfam": "ipv4", 00:41:32.402 "trsvcid": "4420", 00:41:32.402 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:32.402 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:32.402 "hdgst": false, 00:41:32.402 "ddgst": false 00:41:32.402 }, 00:41:32.403 "method": "bdev_nvme_attach_controller" 00:41:32.403 },{ 00:41:32.403 "params": { 00:41:32.403 "name": "Nvme1", 00:41:32.403 "trtype": "tcp", 00:41:32.403 "traddr": "10.0.0.2", 00:41:32.403 "adrfam": "ipv4", 00:41:32.403 "trsvcid": "4420", 00:41:32.403 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:32.403 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:32.403 "hdgst": false, 00:41:32.403 "ddgst": false 00:41:32.403 }, 00:41:32.403 "method": "bdev_nvme_attach_controller" 00:41:32.403 }' 00:41:32.403 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:32.403 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:32.403 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:32.403 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:32.403 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:32.403 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:32.403 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:32.403 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:32.403 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:32.403 20:42:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:32.403 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:32.403 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:32.403 fio-3.35 00:41:32.403 Starting 2 threads 00:41:42.376 00:41:42.376 filename0: (groupid=0, jobs=1): err= 0: pid=467370: Mon Nov 18 20:42:53 2024 00:41:42.376 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10011msec) 00:41:42.376 slat (nsec): min=5334, max=30440, avg=9697.89, stdev=2961.37 00:41:42.376 clat (usec): min=40827, max=47093, avg=40994.49, stdev=391.45 00:41:42.376 lat (usec): min=40835, max=47109, avg=41004.19, stdev=391.52 00:41:42.376 clat percentiles (usec): 00:41:42.376 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:42.376 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:42.376 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:42.376 | 99.00th=[41157], 99.50th=[41157], 99.90th=[46924], 99.95th=[46924], 00:41:42.376 | 99.99th=[46924] 00:41:42.376 bw ( KiB/s): min= 384, max= 416, per=49.75%, avg=388.80, stdev=11.72, samples=20 00:41:42.376 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:41:42.376 lat (msec) : 50=100.00% 00:41:42.376 cpu : usr=94.39%, sys=5.24%, ctx=49, majf=0, minf=193 00:41:42.376 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:42.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:42.376 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:42.376 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:42.376 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:42.376 filename1: (groupid=0, jobs=1): err= 0: pid=467371: Mon Nov 18 20:42:53 2024 00:41:42.376 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10012msec) 00:41:42.376 slat (nsec): min=7109, max=28333, avg=9573.17, stdev=2576.39 00:41:42.376 clat (usec): min=40909, max=48039, avg=40999.18, stdev=451.17 00:41:42.376 lat (usec): min=40917, max=48056, avg=41008.76, stdev=451.29 00:41:42.376 clat percentiles (usec): 00:41:42.376 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:42.376 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:42.376 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:42.376 | 99.00th=[41157], 99.50th=[41157], 99.90th=[47973], 99.95th=[47973], 00:41:42.376 | 99.99th=[47973] 00:41:42.376 bw ( KiB/s): min= 384, max= 416, per=49.75%, avg=388.80, stdev=11.72, samples=20 00:41:42.376 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:41:42.376 lat (msec) : 50=100.00% 00:41:42.376 cpu : usr=95.02%, sys=4.70%, ctx=7, majf=0, minf=85 00:41:42.376 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:42.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:42.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:42.376 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:42.376 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:42.376 00:41:42.376 Run status group 0 (all jobs): 00:41:42.376 READ: bw=780KiB/s (799kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=7808KiB (7995kB), run=10011-10012msec 00:41:42.376 20:42:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:41:42.376 20:42:54 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@43 -- # local sub 00:41:42.376 20:42:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:42.376 20:42:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:42.376 20:42:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:41:42.376 20:42:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:42.376 20:42:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.376 20:42:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:42.376 20:42:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.376 20:42:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:42.376 20:42:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.376 20:42:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:42.376 20:42:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.376 20:42:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:42.376 20:42:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:42.376 20:42:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:41:42.376 20:42:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:42.376 20:42:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.376 20:42:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:42.376 20:42:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:41:42.376 20:42:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:42.376 20:42:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.377 20:42:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:42.377 20:42:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.377 00:41:42.377 real 0m11.358s 00:41:42.377 user 0m20.265s 00:41:42.377 sys 0m1.292s 00:41:42.377 20:42:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:42.377 20:42:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:42.377 ************************************ 00:41:42.377 END TEST fio_dif_1_multi_subsystems 00:41:42.377 ************************************ 00:41:42.377 20:42:54 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:41:42.377 20:42:54 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:42.377 20:42:54 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:42.377 20:42:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:42.377 ************************************ 00:41:42.377 START TEST fio_dif_rand_params 00:41:42.377 ************************************ 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:41:42.377 20:42:54 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:42.377 bdev_null0 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:42.377 [2024-11-18 20:42:54.235884] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:42.377 { 00:41:42.377 "params": { 00:41:42.377 "name": "Nvme$subsystem", 00:41:42.377 "trtype": "$TEST_TRANSPORT", 00:41:42.377 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:42.377 "adrfam": "ipv4", 00:41:42.377 "trsvcid": "$NVMF_PORT", 00:41:42.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:42.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:42.377 "hdgst": ${hdgst:-false}, 00:41:42.377 "ddgst": ${ddgst:-false} 00:41:42.377 }, 00:41:42.377 "method": "bdev_nvme_attach_controller" 00:41:42.377 } 00:41:42.377 EOF 00:41:42.377 )") 
00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:42.377 
20:42:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:42.377 "params": { 00:41:42.377 "name": "Nvme0", 00:41:42.377 "trtype": "tcp", 00:41:42.377 "traddr": "10.0.0.2", 00:41:42.377 "adrfam": "ipv4", 00:41:42.377 "trsvcid": "4420", 00:41:42.377 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:42.377 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:42.377 "hdgst": false, 00:41:42.377 "ddgst": false 00:41:42.377 }, 00:41:42.377 "method": "bdev_nvme_attach_controller" 00:41:42.377 }' 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:42.377 20:42:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:42.635 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, 
(W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:42.635 ... 00:41:42.635 fio-3.35 00:41:42.635 Starting 3 threads 00:41:49.196 00:41:49.196 filename0: (groupid=0, jobs=1): err= 0: pid=468766: Mon Nov 18 20:43:00 2024 00:41:49.196 read: IOPS=208, BW=26.1MiB/s (27.4MB/s)(131MiB/5004msec) 00:41:49.196 slat (nsec): min=5117, max=63526, avg=13648.67, stdev=4246.80 00:41:49.197 clat (usec): min=5751, max=55274, avg=14345.81, stdev=6204.97 00:41:49.197 lat (usec): min=5758, max=55286, avg=14359.46, stdev=6204.97 00:41:49.197 clat percentiles (usec): 00:41:49.197 | 1.00th=[ 5997], 5.00th=[ 8979], 10.00th=[10683], 20.00th=[11731], 00:41:49.197 | 30.00th=[12387], 40.00th=[13042], 50.00th=[13566], 60.00th=[14222], 00:41:49.197 | 70.00th=[15008], 80.00th=[15664], 90.00th=[16319], 95.00th=[17171], 00:41:49.197 | 99.00th=[51119], 99.50th=[53216], 99.90th=[55313], 99.95th=[55313], 00:41:49.197 | 99.99th=[55313] 00:41:49.197 bw ( KiB/s): min=20777, max=30720, per=30.92%, avg=26679.30, stdev=3229.38, samples=10 00:41:49.197 iops : min= 162, max= 240, avg=208.40, stdev=25.29, samples=10 00:41:49.197 lat (msec) : 10=7.56%, 20=89.86%, 50=1.44%, 100=1.15% 00:41:49.197 cpu : usr=93.32%, sys=6.18%, ctx=11, majf=0, minf=115 00:41:49.197 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:49.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.197 issued rwts: total=1045,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.197 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:49.197 filename0: (groupid=0, jobs=1): err= 0: pid=468767: Mon Nov 18 20:43:00 2024 00:41:49.197 read: IOPS=227, BW=28.4MiB/s (29.8MB/s)(144MiB/5046msec) 00:41:49.197 slat (nsec): min=5303, max=86731, avg=15060.21, stdev=5062.65 00:41:49.197 clat (usec): min=4561, max=53178, avg=13128.25, stdev=6263.31 00:41:49.197 lat (usec): min=4573, 
max=53190, avg=13143.31, stdev=6263.10 00:41:49.197 clat percentiles (usec): 00:41:49.197 | 1.00th=[ 5014], 5.00th=[ 8455], 10.00th=[10159], 20.00th=[11076], 00:41:49.197 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12387], 60.00th=[12780], 00:41:49.197 | 70.00th=[13173], 80.00th=[13829], 90.00th=[15008], 95.00th=[15795], 00:41:49.197 | 99.00th=[51643], 99.50th=[52167], 99.90th=[53216], 99.95th=[53216], 00:41:49.197 | 99.99th=[53216] 00:41:49.197 bw ( KiB/s): min=23808, max=32768, per=33.98%, avg=29316.90, stdev=2860.20, samples=10 00:41:49.197 iops : min= 186, max= 256, avg=229.00, stdev=22.42, samples=10 00:41:49.197 lat (msec) : 10=9.58%, 20=87.89%, 50=1.22%, 100=1.31% 00:41:49.197 cpu : usr=93.18%, sys=6.26%, ctx=30, majf=0, minf=117 00:41:49.197 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:49.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.197 issued rwts: total=1148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.197 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:49.197 filename0: (groupid=0, jobs=1): err= 0: pid=468768: Mon Nov 18 20:43:00 2024 00:41:49.197 read: IOPS=239, BW=29.9MiB/s (31.4MB/s)(151MiB/5047msec) 00:41:49.197 slat (nsec): min=5205, max=52824, avg=14672.07, stdev=4716.73 00:41:49.197 clat (usec): min=6297, max=57522, avg=12469.93, stdev=5563.56 00:41:49.197 lat (usec): min=6309, max=57544, avg=12484.60, stdev=5563.47 00:41:49.197 clat percentiles (usec): 00:41:49.197 | 1.00th=[ 7308], 5.00th=[ 8356], 10.00th=[ 9503], 20.00th=[10683], 00:41:49.197 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11863], 60.00th=[12125], 00:41:49.197 | 70.00th=[12649], 80.00th=[13042], 90.00th=[13960], 95.00th=[14877], 00:41:49.197 | 99.00th=[49021], 99.50th=[50070], 99.90th=[57410], 99.95th=[57410], 00:41:49.197 | 99.99th=[57410] 00:41:49.197 bw ( KiB/s): min=18395, max=34816, per=35.78%, 
avg=30869.90, stdev=4562.32, samples=10 00:41:49.197 iops : min= 143, max= 272, avg=241.10, stdev=35.86, samples=10 00:41:49.197 lat (msec) : 10=12.41%, 20=85.69%, 50=1.08%, 100=0.83% 00:41:49.197 cpu : usr=91.97%, sys=7.51%, ctx=17, majf=0, minf=92 00:41:49.197 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:49.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.197 issued rwts: total=1209,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.197 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:49.197 00:41:49.197 Run status group 0 (all jobs): 00:41:49.197 READ: bw=84.3MiB/s (88.3MB/s), 26.1MiB/s-29.9MiB/s (27.4MB/s-31.4MB/s), io=425MiB (446MB), run=5004-5047msec 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.197 bdev_null0 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.197 20:43:00 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.197 [2024-11-18 20:43:00.501387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.197 bdev_null1 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.197 bdev_null2 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:41:49.197 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:49.198 20:43:00 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:49.198 { 00:41:49.198 "params": { 00:41:49.198 "name": "Nvme$subsystem", 00:41:49.198 "trtype": "$TEST_TRANSPORT", 00:41:49.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:49.198 "adrfam": "ipv4", 00:41:49.198 "trsvcid": "$NVMF_PORT", 00:41:49.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:49.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:49.198 "hdgst": ${hdgst:-false}, 00:41:49.198 "ddgst": ${ddgst:-false} 00:41:49.198 }, 00:41:49.198 "method": "bdev_nvme_attach_controller" 00:41:49.198 } 00:41:49.198 EOF 00:41:49.198 )") 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:49.198 20:43:00 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:49.198 { 00:41:49.198 "params": { 00:41:49.198 "name": "Nvme$subsystem", 00:41:49.198 "trtype": "$TEST_TRANSPORT", 00:41:49.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:49.198 "adrfam": "ipv4", 00:41:49.198 "trsvcid": "$NVMF_PORT", 00:41:49.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:49.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:49.198 "hdgst": ${hdgst:-false}, 00:41:49.198 "ddgst": ${ddgst:-false} 00:41:49.198 }, 00:41:49.198 "method": "bdev_nvme_attach_controller" 00:41:49.198 } 00:41:49.198 EOF 00:41:49.198 )") 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:49.198 20:43:00 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:49.198 { 00:41:49.198 "params": { 00:41:49.198 "name": "Nvme$subsystem", 00:41:49.198 "trtype": "$TEST_TRANSPORT", 00:41:49.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:49.198 "adrfam": "ipv4", 00:41:49.198 "trsvcid": "$NVMF_PORT", 00:41:49.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:49.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:49.198 "hdgst": ${hdgst:-false}, 00:41:49.198 "ddgst": ${ddgst:-false} 00:41:49.198 }, 00:41:49.198 "method": "bdev_nvme_attach_controller" 00:41:49.198 } 00:41:49.198 EOF 00:41:49.198 )") 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:49.198 "params": { 00:41:49.198 "name": "Nvme0", 00:41:49.198 "trtype": "tcp", 00:41:49.198 "traddr": "10.0.0.2", 00:41:49.198 "adrfam": "ipv4", 00:41:49.198 "trsvcid": "4420", 00:41:49.198 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:49.198 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:49.198 "hdgst": false, 00:41:49.198 "ddgst": false 00:41:49.198 }, 00:41:49.198 "method": "bdev_nvme_attach_controller" 00:41:49.198 },{ 00:41:49.198 "params": { 00:41:49.198 "name": "Nvme1", 00:41:49.198 "trtype": "tcp", 00:41:49.198 "traddr": "10.0.0.2", 00:41:49.198 "adrfam": "ipv4", 00:41:49.198 "trsvcid": "4420", 00:41:49.198 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:49.198 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:49.198 "hdgst": false, 00:41:49.198 "ddgst": false 00:41:49.198 }, 00:41:49.198 "method": "bdev_nvme_attach_controller" 00:41:49.198 },{ 00:41:49.198 "params": { 00:41:49.198 "name": "Nvme2", 00:41:49.198 "trtype": "tcp", 00:41:49.198 "traddr": "10.0.0.2", 00:41:49.198 "adrfam": "ipv4", 00:41:49.198 "trsvcid": "4420", 00:41:49.198 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:41:49.198 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:41:49.198 "hdgst": false, 00:41:49.198 "ddgst": false 00:41:49.198 }, 00:41:49.198 "method": "bdev_nvme_attach_controller" 00:41:49.198 }' 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:49.198 20:43:00 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:49.198 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:49.199 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:49.199 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:49.199 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:49.199 20:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:49.199 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:49.199 ... 00:41:49.199 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:49.199 ... 00:41:49.199 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:49.199 ... 
00:41:49.199 fio-3.35 00:41:49.199 Starting 24 threads 00:42:01.401 00:42:01.401 filename0: (groupid=0, jobs=1): err= 0: pid=469621: Mon Nov 18 20:43:11 2024 00:42:01.401 read: IOPS=483, BW=1935KiB/s (1982kB/s)(18.9MiB/10020msec) 00:42:01.401 slat (usec): min=6, max=136, avg=48.83, stdev=24.62 00:42:01.401 clat (usec): min=8250, max=40629, avg=32649.38, stdev=2058.24 00:42:01.401 lat (usec): min=8260, max=40675, avg=32698.22, stdev=2059.11 00:42:01.401 clat percentiles (usec): 00:42:01.401 | 1.00th=[20055], 5.00th=[31851], 10.00th=[32113], 20.00th=[32637], 00:42:01.401 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:42:01.401 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:42:01.401 | 99.00th=[34341], 99.50th=[34341], 99.90th=[39584], 99.95th=[39584], 00:42:01.401 | 99.99th=[40633] 00:42:01.401 bw ( KiB/s): min= 1792, max= 2176, per=4.20%, avg=1932.80, stdev=70.91, samples=20 00:42:01.401 iops : min= 448, max= 544, avg=483.20, stdev=17.73, samples=20 00:42:01.401 lat (msec) : 10=0.33%, 20=0.66%, 50=99.01% 00:42:01.401 cpu : usr=97.37%, sys=1.76%, ctx=172, majf=0, minf=41 00:42:01.401 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:42:01.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.401 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.401 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.401 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.401 filename0: (groupid=0, jobs=1): err= 0: pid=469622: Mon Nov 18 20:43:11 2024 00:42:01.401 read: IOPS=481, BW=1925KiB/s (1971kB/s)(18.8MiB/10008msec) 00:42:01.401 slat (nsec): min=8699, max=70305, avg=32559.71, stdev=10368.13 00:42:01.401 clat (usec): min=17422, max=41646, avg=32990.37, stdev=1014.82 00:42:01.401 lat (usec): min=17455, max=41674, avg=33022.93, stdev=1014.16 00:42:01.401 clat percentiles (usec): 00:42:01.401 | 1.00th=[32113], 
5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:42:01.401 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:42:01.401 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:42:01.401 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[41157], 00:42:01.401 | 99.99th=[41681] 00:42:01.401 bw ( KiB/s): min= 1904, max= 2032, per=4.18%, avg=1925.60, stdev=26.10, samples=20 00:42:01.401 iops : min= 476, max= 508, avg=481.40, stdev= 6.52, samples=20 00:42:01.401 lat (msec) : 20=0.33%, 50=99.67% 00:42:01.401 cpu : usr=98.59%, sys=1.02%, ctx=14, majf=0, minf=43 00:42:01.401 IO depths : 1=0.1%, 2=6.4%, 4=25.0%, 8=56.1%, 16=12.4%, 32=0.0%, >=64=0.0% 00:42:01.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.401 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.401 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.401 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.401 filename0: (groupid=0, jobs=1): err= 0: pid=469623: Mon Nov 18 20:43:11 2024 00:42:01.401 read: IOPS=480, BW=1923KiB/s (1969kB/s)(18.8MiB/10021msec) 00:42:01.401 slat (usec): min=8, max=124, avg=56.52, stdev=25.24 00:42:01.401 clat (usec): min=17356, max=34782, avg=32722.12, stdev=1050.02 00:42:01.401 lat (usec): min=17390, max=34804, avg=32778.64, stdev=1045.14 00:42:01.401 clat percentiles (usec): 00:42:01.401 | 1.00th=[31589], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:42:01.401 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:42:01.401 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:42:01.401 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:42:01.401 | 99.99th=[34866] 00:42:01.401 bw ( KiB/s): min= 1920, max= 2048, per=4.19%, avg=1926.40, stdev=28.62, samples=20 00:42:01.401 iops : min= 480, max= 512, avg=481.60, stdev= 7.16, samples=20 00:42:01.401 lat (msec) : 
20=0.33%, 50=99.67% 00:42:01.401 cpu : usr=97.50%, sys=1.61%, ctx=70, majf=0, minf=25 00:42:01.401 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:01.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.401 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.401 issued rwts: total=4817,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.401 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.401 filename0: (groupid=0, jobs=1): err= 0: pid=469624: Mon Nov 18 20:43:11 2024 00:42:01.401 read: IOPS=488, BW=1954KiB/s (2001kB/s)(19.1MiB/10033msec) 00:42:01.401 slat (usec): min=3, max=117, avg=35.59, stdev=26.61 00:42:01.401 clat (usec): min=1630, max=34814, avg=32455.33, stdev=3660.91 00:42:01.401 lat (usec): min=1637, max=34883, avg=32490.92, stdev=3662.48 00:42:01.401 clat percentiles (usec): 00:42:01.401 | 1.00th=[ 4424], 5.00th=[31851], 10.00th=[32375], 20.00th=[32637], 00:42:01.401 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:42:01.401 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:42:01.401 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:42:01.401 | 99.99th=[34866] 00:42:01.401 bw ( KiB/s): min= 1920, max= 2480, per=4.25%, avg=1954.40, stdev=126.97, samples=20 00:42:01.401 iops : min= 480, max= 620, avg=488.60, stdev=31.74, samples=20 00:42:01.401 lat (msec) : 2=0.61%, 4=0.04%, 10=0.61%, 20=0.65%, 50=98.08% 00:42:01.401 cpu : usr=96.83%, sys=2.01%, ctx=207, majf=0, minf=34 00:42:01.401 IO depths : 1=6.1%, 2=12.2%, 4=24.5%, 8=50.7%, 16=6.4%, 32=0.0%, >=64=0.0% 00:42:01.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.401 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.401 issued rwts: total=4902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.401 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.401 
filename0: (groupid=0, jobs=1): err= 0: pid=469626: Mon Nov 18 20:43:11 2024 00:42:01.401 read: IOPS=481, BW=1924KiB/s (1970kB/s)(18.8MiB/10011msec) 00:42:01.401 slat (nsec): min=3939, max=95087, avg=31977.32, stdev=12714.21 00:42:01.401 clat (usec): min=20153, max=45866, avg=32978.78, stdev=846.40 00:42:01.401 lat (usec): min=20165, max=45901, avg=33010.75, stdev=844.78 00:42:01.401 clat percentiles (usec): 00:42:01.401 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32637], 20.00th=[32637], 00:42:01.401 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:42:01.401 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:42:01.401 | 99.00th=[34341], 99.50th=[34866], 99.90th=[38536], 99.95th=[45351], 00:42:01.401 | 99.99th=[45876] 00:42:01.401 bw ( KiB/s): min= 1920, max= 2048, per=4.19%, avg=1926.74, stdev=29.37, samples=19 00:42:01.401 iops : min= 480, max= 512, avg=481.68, stdev= 7.34, samples=19 00:42:01.401 lat (msec) : 50=100.00% 00:42:01.401 cpu : usr=98.17%, sys=1.17%, ctx=63, majf=0, minf=40 00:42:01.401 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:01.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.401 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.401 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.401 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.401 filename0: (groupid=0, jobs=1): err= 0: pid=469627: Mon Nov 18 20:43:11 2024 00:42:01.401 read: IOPS=481, BW=1926KiB/s (1972kB/s)(18.8MiB/10004msec) 00:42:01.401 slat (usec): min=7, max=101, avg=33.36, stdev= 9.67 00:42:01.401 clat (usec): min=3793, max=66114, avg=32935.72, stdev=2745.06 00:42:01.401 lat (usec): min=3801, max=66178, avg=32969.08, stdev=2746.34 00:42:01.401 clat percentiles (usec): 00:42:01.401 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32637], 20.00th=[32637], 00:42:01.401 | 30.00th=[32900], 40.00th=[32900], 
50.00th=[32900], 60.00th=[32900], 00:42:01.401 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:42:01.401 | 99.00th=[34341], 99.50th=[34866], 99.90th=[65799], 99.95th=[65799], 00:42:01.401 | 99.99th=[66323] 00:42:01.401 bw ( KiB/s): min= 1664, max= 2048, per=4.16%, avg=1913.26, stdev=67.11, samples=19 00:42:01.401 iops : min= 416, max= 512, avg=478.32, stdev=16.78, samples=19 00:42:01.401 lat (msec) : 4=0.33%, 20=0.33%, 50=99.00%, 100=0.33% 00:42:01.401 cpu : usr=98.54%, sys=1.07%, ctx=13, majf=0, minf=36 00:42:01.401 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:01.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.402 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.402 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.402 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.402 filename0: (groupid=0, jobs=1): err= 0: pid=469628: Mon Nov 18 20:43:11 2024 00:42:01.402 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10003msec) 00:42:01.402 slat (usec): min=10, max=110, avg=37.87, stdev=17.90 00:42:01.402 clat (usec): min=15036, max=67893, avg=32985.33, stdev=2281.60 00:42:01.402 lat (usec): min=15068, max=67933, avg=33023.20, stdev=2280.88 00:42:01.402 clat percentiles (usec): 00:42:01.402 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:42:01.402 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:42:01.402 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:42:01.402 | 99.00th=[34341], 99.50th=[34866], 99.90th=[67634], 99.95th=[67634], 00:42:01.402 | 99.99th=[67634] 00:42:01.402 bw ( KiB/s): min= 1667, max= 2048, per=4.16%, avg=1913.42, stdev=66.49, samples=19 00:42:01.402 iops : min= 416, max= 512, avg=478.32, stdev=16.78, samples=19 00:42:01.402 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:42:01.402 cpu : usr=97.80%, sys=1.51%, ctx=68, 
majf=0, minf=33 00:42:01.402 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:01.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.402 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.402 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.402 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.402 filename0: (groupid=0, jobs=1): err= 0: pid=469629: Mon Nov 18 20:43:11 2024 00:42:01.402 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10004msec) 00:42:01.402 slat (usec): min=4, max=130, avg=23.10, stdev=14.26 00:42:01.402 clat (usec): min=24184, max=58392, avg=33167.85, stdev=1579.43 00:42:01.402 lat (usec): min=24199, max=58408, avg=33190.95, stdev=1577.35 00:42:01.402 clat percentiles (usec): 00:42:01.402 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:42:01.402 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:42:01.402 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:42:01.402 | 99.00th=[34341], 99.50th=[34866], 99.90th=[58459], 99.95th=[58459], 00:42:01.402 | 99.99th=[58459] 00:42:01.402 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1913.42, stdev=51.41, samples=19 00:42:01.402 iops : min= 448, max= 512, avg=478.32, stdev=12.95, samples=19 00:42:01.402 lat (msec) : 50=99.67%, 100=0.33% 00:42:01.402 cpu : usr=98.49%, sys=1.12%, ctx=13, majf=0, minf=23 00:42:01.402 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:01.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.402 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.402 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.402 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.402 filename1: (groupid=0, jobs=1): err= 0: pid=469630: Mon Nov 18 20:43:11 2024 00:42:01.402 read: 
IOPS=480, BW=1924KiB/s (1970kB/s)(18.8MiB/10015msec) 00:42:01.402 slat (nsec): min=4694, max=71773, avg=32729.06, stdev=9616.75 00:42:01.402 clat (usec): min=15157, max=47152, avg=32994.52, stdev=1345.73 00:42:01.402 lat (usec): min=15173, max=47166, avg=33027.25, stdev=1345.10 00:42:01.402 clat percentiles (usec): 00:42:01.402 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32637], 00:42:01.402 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:42:01.402 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:42:01.402 | 99.00th=[34341], 99.50th=[34866], 99.90th=[46924], 99.95th=[46924], 00:42:01.402 | 99.99th=[46924] 00:42:01.402 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1920.00, stdev=42.67, samples=19 00:42:01.402 iops : min= 448, max= 512, avg=480.00, stdev=10.67, samples=19 00:42:01.402 lat (msec) : 20=0.33%, 50=99.67% 00:42:01.402 cpu : usr=98.33%, sys=1.28%, ctx=15, majf=0, minf=22 00:42:01.402 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:01.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.402 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.402 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.402 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.402 filename1: (groupid=0, jobs=1): err= 0: pid=469631: Mon Nov 18 20:43:11 2024 00:42:01.402 read: IOPS=482, BW=1929KiB/s (1975kB/s)(18.9MiB/10019msec) 00:42:01.402 slat (usec): min=6, max=114, avg=25.47, stdev=18.06 00:42:01.402 clat (usec): min=16654, max=43292, avg=32980.60, stdev=1858.91 00:42:01.402 lat (usec): min=16668, max=43315, avg=33006.07, stdev=1857.96 00:42:01.402 clat percentiles (usec): 00:42:01.402 | 1.00th=[23987], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:42:01.402 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:42:01.402 | 70.00th=[33162], 80.00th=[33424], 
90.00th=[33817], 95.00th=[33817], 00:42:01.402 | 99.00th=[41681], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:42:01.402 | 99.99th=[43254] 00:42:01.402 bw ( KiB/s): min= 1792, max= 2048, per=4.19%, avg=1926.40, stdev=50.44, samples=20 00:42:01.402 iops : min= 448, max= 512, avg=481.60, stdev=12.61, samples=20 00:42:01.402 lat (msec) : 20=0.48%, 50=99.52% 00:42:01.402 cpu : usr=98.18%, sys=1.43%, ctx=16, majf=0, minf=43 00:42:01.402 IO depths : 1=5.6%, 2=11.9%, 4=24.9%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:42:01.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.402 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.402 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.402 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.402 filename1: (groupid=0, jobs=1): err= 0: pid=469633: Mon Nov 18 20:43:11 2024 00:42:01.402 read: IOPS=481, BW=1926KiB/s (1972kB/s)(18.8MiB/10001msec) 00:42:01.402 slat (usec): min=8, max=147, avg=36.96, stdev=17.03 00:42:01.402 clat (usec): min=17427, max=34754, avg=32872.91, stdev=982.49 00:42:01.402 lat (usec): min=17450, max=34786, avg=32909.86, stdev=981.62 00:42:01.402 clat percentiles (usec): 00:42:01.402 | 1.00th=[31589], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:42:01.402 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:42:01.402 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:42:01.402 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:42:01.402 | 99.99th=[34866] 00:42:01.402 bw ( KiB/s): min= 1920, max= 2048, per=4.19%, avg=1926.74, stdev=29.37, samples=19 00:42:01.402 iops : min= 480, max= 512, avg=481.68, stdev= 7.34, samples=19 00:42:01.402 lat (msec) : 20=0.33%, 50=99.67% 00:42:01.402 cpu : usr=94.15%, sys=3.21%, ctx=1045, majf=0, minf=29 00:42:01.402 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:01.402 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.402 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.402 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.402 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.402 filename1: (groupid=0, jobs=1): err= 0: pid=469634: Mon Nov 18 20:43:11 2024 00:42:01.402 read: IOPS=479, BW=1920KiB/s (1966kB/s)(18.8MiB/10001msec) 00:42:01.402 slat (usec): min=4, max=121, avg=35.03, stdev=16.98 00:42:01.402 clat (usec): min=23821, max=56109, avg=33031.55, stdev=1477.75 00:42:01.402 lat (usec): min=23844, max=56123, avg=33066.58, stdev=1475.07 00:42:01.402 clat percentiles (usec): 00:42:01.402 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:42:01.402 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:42:01.402 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:42:01.402 | 99.00th=[34341], 99.50th=[34866], 99.90th=[55837], 99.95th=[55837], 00:42:01.402 | 99.99th=[56361] 00:42:01.402 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1920.00, stdev=42.67, samples=19 00:42:01.402 iops : min= 448, max= 512, avg=480.00, stdev=10.67, samples=19 00:42:01.402 lat (msec) : 50=99.67%, 100=0.33% 00:42:01.402 cpu : usr=98.31%, sys=1.27%, ctx=24, majf=0, minf=31 00:42:01.402 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:01.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.402 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.402 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.402 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.402 filename1: (groupid=0, jobs=1): err= 0: pid=469635: Mon Nov 18 20:43:11 2024 00:42:01.402 read: IOPS=481, BW=1927KiB/s (1974kB/s)(18.8MiB/10008msec) 00:42:01.402 slat (usec): min=4, max=128, avg=46.99, stdev=25.15 
00:42:01.402 clat (usec): min=15723, max=73142, avg=32785.46, stdev=2347.13 00:42:01.402 lat (usec): min=15732, max=73155, avg=32832.44, stdev=2346.21 00:42:01.402 clat percentiles (usec): 00:42:01.402 | 1.00th=[24249], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:42:01.402 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:42:01.402 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:42:01.402 | 99.00th=[35914], 99.50th=[47449], 99.90th=[56361], 99.95th=[56361], 00:42:01.402 | 99.99th=[72877] 00:42:01.402 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1920.00, stdev=43.00, samples=19 00:42:01.402 iops : min= 448, max= 512, avg=480.00, stdev=10.75, samples=19 00:42:01.402 lat (msec) : 20=0.41%, 50=99.25%, 100=0.33% 00:42:01.403 cpu : usr=97.47%, sys=1.70%, ctx=102, majf=0, minf=33 00:42:01.403 IO depths : 1=5.3%, 2=11.3%, 4=24.2%, 8=51.8%, 16=7.3%, 32=0.0%, >=64=0.0% 00:42:01.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.403 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.403 issued rwts: total=4822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.403 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.403 filename1: (groupid=0, jobs=1): err= 0: pid=469636: Mon Nov 18 20:43:11 2024 00:42:01.403 read: IOPS=482, BW=1929KiB/s (1975kB/s)(18.9MiB/10008msec) 00:42:01.403 slat (usec): min=4, max=124, avg=54.37, stdev=28.24 00:42:01.403 clat (usec): min=17147, max=71608, avg=32718.61, stdev=2271.43 00:42:01.403 lat (usec): min=17238, max=71620, avg=32772.98, stdev=2265.31 00:42:01.403 clat percentiles (usec): 00:42:01.403 | 1.00th=[22676], 5.00th=[31851], 10.00th=[31851], 20.00th=[32375], 00:42:01.403 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:42:01.403 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:42:01.403 | 99.00th=[36963], 99.50th=[43254], 99.90th=[55837], 99.95th=[55837], 
00:42:01.403 | 99.99th=[71828] 00:42:01.403 bw ( KiB/s): min= 1776, max= 2072, per=4.18%, avg=1924.21, stdev=59.17, samples=19 00:42:01.403 iops : min= 444, max= 518, avg=481.05, stdev=14.79, samples=19 00:42:01.403 lat (msec) : 20=0.29%, 50=99.38%, 100=0.33% 00:42:01.403 cpu : usr=98.38%, sys=1.15%, ctx=44, majf=0, minf=37 00:42:01.403 IO depths : 1=4.5%, 2=10.6%, 4=24.3%, 8=52.6%, 16=8.0%, 32=0.0%, >=64=0.0% 00:42:01.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.403 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.403 issued rwts: total=4826,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.403 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.403 filename1: (groupid=0, jobs=1): err= 0: pid=469638: Mon Nov 18 20:43:11 2024 00:42:01.403 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10003msec) 00:42:01.403 slat (nsec): min=4238, max=73520, avg=28439.00, stdev=12934.56 00:42:01.403 clat (usec): min=23950, max=57819, avg=33122.51, stdev=1552.72 00:42:01.403 lat (usec): min=23986, max=57832, avg=33150.95, stdev=1550.56 00:42:01.403 clat percentiles (usec): 00:42:01.403 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:42:01.403 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:42:01.403 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:42:01.403 | 99.00th=[34341], 99.50th=[34866], 99.90th=[57934], 99.95th=[57934], 00:42:01.403 | 99.99th=[57934] 00:42:01.403 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1913.26, stdev=51.80, samples=19 00:42:01.403 iops : min= 448, max= 512, avg=478.32, stdev=12.95, samples=19 00:42:01.403 lat (msec) : 50=99.67%, 100=0.33% 00:42:01.403 cpu : usr=98.53%, sys=1.09%, ctx=13, majf=0, minf=33 00:42:01.403 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:01.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.403 complete : 
0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.403 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.403 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.403 filename1: (groupid=0, jobs=1): err= 0: pid=469639: Mon Nov 18 20:43:11 2024 00:42:01.403 read: IOPS=481, BW=1926KiB/s (1972kB/s)(18.8MiB/10001msec) 00:42:01.403 slat (nsec): min=11506, max=80935, avg=34903.76, stdev=9682.40 00:42:01.403 clat (usec): min=17397, max=34809, avg=32904.98, stdev=969.72 00:42:01.403 lat (usec): min=17430, max=34835, avg=32939.89, stdev=970.16 00:42:01.403 clat percentiles (usec): 00:42:01.403 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32637], 20.00th=[32637], 00:42:01.403 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:42:01.403 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:42:01.403 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:42:01.403 | 99.99th=[34866] 00:42:01.403 bw ( KiB/s): min= 1920, max= 2048, per=4.19%, avg=1926.74, stdev=29.37, samples=19 00:42:01.403 iops : min= 480, max= 512, avg=481.68, stdev= 7.34, samples=19 00:42:01.403 lat (msec) : 20=0.33%, 50=99.67% 00:42:01.403 cpu : usr=98.10%, sys=1.30%, ctx=65, majf=0, minf=29 00:42:01.403 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:01.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.403 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.403 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.403 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.403 filename2: (groupid=0, jobs=1): err= 0: pid=469641: Mon Nov 18 20:43:11 2024 00:42:01.403 read: IOPS=481, BW=1926KiB/s (1972kB/s)(18.8MiB/10003msec) 00:42:01.403 slat (usec): min=3, max=130, avg=45.59, stdev=23.69 00:42:01.403 clat (usec): min=20289, max=45589, avg=32845.78, stdev=1902.21 00:42:01.403 
lat (usec): min=20301, max=45629, avg=32891.37, stdev=1902.18 00:42:01.403 clat percentiles (usec): 00:42:01.403 | 1.00th=[21890], 5.00th=[31851], 10.00th=[32113], 20.00th=[32637], 00:42:01.403 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:42:01.403 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:42:01.403 | 99.00th=[44303], 99.50th=[44827], 99.90th=[45351], 99.95th=[45351], 00:42:01.403 | 99.99th=[45351] 00:42:01.403 bw ( KiB/s): min= 1904, max= 2048, per=4.19%, avg=1926.74, stdev=29.85, samples=19 00:42:01.403 iops : min= 476, max= 512, avg=481.68, stdev= 7.46, samples=19 00:42:01.403 lat (msec) : 50=100.00% 00:42:01.403 cpu : usr=97.84%, sys=1.51%, ctx=79, majf=0, minf=24 00:42:01.403 IO depths : 1=5.2%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.3%, 32=0.0%, >=64=0.0% 00:42:01.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.403 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.403 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.403 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.403 filename2: (groupid=0, jobs=1): err= 0: pid=469643: Mon Nov 18 20:43:11 2024 00:42:01.403 read: IOPS=484, BW=1938KiB/s (1985kB/s)(18.9MiB/10006msec) 00:42:01.403 slat (nsec): min=7565, max=78341, avg=31331.18, stdev=11527.19 00:42:01.403 clat (usec): min=6020, max=68856, avg=32718.12, stdev=3259.88 00:42:01.403 lat (usec): min=6029, max=68901, avg=32749.45, stdev=3262.18 00:42:01.403 clat percentiles (usec): 00:42:01.403 | 1.00th=[21365], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:42:01.403 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:42:01.403 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33424], 00:42:01.403 | 99.00th=[34866], 99.50th=[51119], 99.90th=[68682], 99.95th=[68682], 00:42:01.403 | 99.99th=[68682] 00:42:01.403 bw ( KiB/s): min= 1664, max= 2048, per=4.16%, avg=1913.26, 
stdev=67.11, samples=19 00:42:01.403 iops : min= 416, max= 512, avg=478.32, stdev=16.78, samples=19 00:42:01.403 lat (msec) : 10=0.08%, 20=0.37%, 50=99.01%, 100=0.54% 00:42:01.403 cpu : usr=98.50%, sys=1.10%, ctx=20, majf=0, minf=36 00:42:01.403 IO depths : 1=5.9%, 2=11.9%, 4=24.2%, 8=51.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:42:01.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.403 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.403 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.403 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.403 filename2: (groupid=0, jobs=1): err= 0: pid=469644: Mon Nov 18 20:43:11 2024 00:42:01.403 read: IOPS=481, BW=1925KiB/s (1971kB/s)(18.8MiB/10009msec) 00:42:01.403 slat (usec): min=4, max=127, avg=35.43, stdev=18.22 00:42:01.403 clat (usec): min=10555, max=50240, avg=32944.63, stdev=1845.55 00:42:01.403 lat (usec): min=10564, max=50255, avg=32980.06, stdev=1844.49 00:42:01.403 clat percentiles (usec): 00:42:01.403 | 1.00th=[31589], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:42:01.403 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:42:01.403 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:42:01.403 | 99.00th=[34341], 99.50th=[34866], 99.90th=[50070], 99.95th=[50070], 00:42:01.403 | 99.99th=[50070] 00:42:01.403 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1920.00, stdev=42.67, samples=19 00:42:01.403 iops : min= 448, max= 512, avg=480.00, stdev=10.67, samples=19 00:42:01.403 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:42:01.403 cpu : usr=98.44%, sys=1.16%, ctx=16, majf=0, minf=41 00:42:01.403 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:42:01.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.403 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.403 issued rwts: 
total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.403 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.403 filename2: (groupid=0, jobs=1): err= 0: pid=469645: Mon Nov 18 20:43:11 2024 00:42:01.403 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10005msec) 00:42:01.403 slat (nsec): min=4126, max=76662, avg=31223.61, stdev=11382.09 00:42:01.403 clat (usec): min=12972, max=69855, avg=33094.37, stdev=1886.95 00:42:01.403 lat (usec): min=13000, max=69874, avg=33125.59, stdev=1885.70 00:42:01.403 clat percentiles (usec): 00:42:01.404 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:42:01.404 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:42:01.404 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:42:01.404 | 99.00th=[34341], 99.50th=[34866], 99.90th=[60556], 99.95th=[60556], 00:42:01.404 | 99.99th=[69731] 00:42:01.404 bw ( KiB/s): min= 1776, max= 2032, per=4.16%, avg=1913.26, stdev=50.12, samples=19 00:42:01.404 iops : min= 444, max= 508, avg=478.32, stdev=12.53, samples=19 00:42:01.404 lat (msec) : 20=0.08%, 50=99.58%, 100=0.33% 00:42:01.404 cpu : usr=97.38%, sys=1.75%, ctx=57, majf=0, minf=47 00:42:01.404 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.5%, 32=0.0%, >=64=0.0% 00:42:01.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.404 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.404 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.404 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.404 filename2: (groupid=0, jobs=1): err= 0: pid=469646: Mon Nov 18 20:43:11 2024 00:42:01.404 read: IOPS=479, BW=1920KiB/s (1966kB/s)(18.8MiB/10001msec) 00:42:01.404 slat (nsec): min=8054, max=68658, avg=31216.37, stdev=10252.30 00:42:01.404 clat (usec): min=22246, max=66946, avg=33055.21, stdev=1630.92 00:42:01.404 lat (usec): min=22257, max=66962, avg=33086.42, stdev=1630.25 
00:42:01.404 clat percentiles (usec): 00:42:01.404 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32637], 00:42:01.404 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:42:01.404 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:42:01.404 | 99.00th=[34341], 99.50th=[34866], 99.90th=[56361], 99.95th=[56361], 00:42:01.404 | 99.99th=[66847] 00:42:01.404 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1920.00, stdev=42.67, samples=19 00:42:01.404 iops : min= 448, max= 512, avg=480.00, stdev=10.67, samples=19 00:42:01.404 lat (msec) : 50=99.67%, 100=0.33% 00:42:01.404 cpu : usr=97.04%, sys=1.81%, ctx=256, majf=0, minf=21 00:42:01.404 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:42:01.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.404 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.404 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.404 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.404 filename2: (groupid=0, jobs=1): err= 0: pid=469647: Mon Nov 18 20:43:11 2024 00:42:01.404 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10054msec) 00:42:01.404 slat (nsec): min=8651, max=70119, avg=34349.96, stdev=9751.63 00:42:01.404 clat (usec): min=16752, max=53480, avg=32963.34, stdev=1244.98 00:42:01.404 lat (usec): min=16766, max=53500, avg=32997.69, stdev=1244.27 00:42:01.404 clat percentiles (usec): 00:42:01.404 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32637], 00:42:01.404 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:42:01.404 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:42:01.404 | 99.00th=[34341], 99.50th=[34341], 99.90th=[53216], 99.95th=[53216], 00:42:01.404 | 99.99th=[53740] 00:42:01.404 bw ( KiB/s): min= 1920, max= 2048, per=4.19%, avg=1926.40, stdev=28.62, samples=20 00:42:01.404 iops : min= 
480, max= 512, avg=481.60, stdev= 7.16, samples=20 00:42:01.404 lat (msec) : 20=0.33%, 50=99.52%, 100=0.15% 00:42:01.404 cpu : usr=98.35%, sys=1.21%, ctx=16, majf=0, minf=36 00:42:01.404 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=49.9%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:01.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.404 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.404 issued rwts: total=4823,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.404 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.404 filename2: (groupid=0, jobs=1): err= 0: pid=469648: Mon Nov 18 20:43:11 2024 00:42:01.404 read: IOPS=481, BW=1926KiB/s (1972kB/s)(18.8MiB/10001msec) 00:42:01.404 slat (usec): min=8, max=122, avg=38.56, stdev=17.22 00:42:01.404 clat (usec): min=16842, max=34764, avg=32868.33, stdev=982.02 00:42:01.404 lat (usec): min=16858, max=34783, avg=32906.89, stdev=981.37 00:42:01.404 clat percentiles (usec): 00:42:01.404 | 1.00th=[31589], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:42:01.404 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:42:01.404 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:42:01.404 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:42:01.404 | 99.99th=[34866] 00:42:01.404 bw ( KiB/s): min= 1920, max= 2048, per=4.19%, avg=1926.74, stdev=29.37, samples=19 00:42:01.404 iops : min= 480, max= 512, avg=481.68, stdev= 7.34, samples=19 00:42:01.404 lat (msec) : 20=0.33%, 50=99.67% 00:42:01.404 cpu : usr=97.38%, sys=1.64%, ctx=115, majf=0, minf=29 00:42:01.404 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:01.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.404 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.404 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.404 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:42:01.404 filename2: (groupid=0, jobs=1): err= 0: pid=469649: Mon Nov 18 20:43:11 2024 00:42:01.404 read: IOPS=480, BW=1922KiB/s (1968kB/s)(18.8MiB/10005msec) 00:42:01.404 slat (usec): min=8, max=120, avg=47.94, stdev=23.13 00:42:01.404 clat (usec): min=4399, max=66232, avg=32852.51, stdev=2453.47 00:42:01.404 lat (usec): min=4432, max=66288, avg=32900.45, stdev=2451.85 00:42:01.404 clat percentiles (usec): 00:42:01.404 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32637], 00:42:01.404 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:42:01.404 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:42:01.404 | 99.00th=[34341], 99.50th=[34866], 99.90th=[66323], 99.95th=[66323], 00:42:01.404 | 99.99th=[66323] 00:42:01.404 bw ( KiB/s): min= 1664, max= 2048, per=4.16%, avg=1913.26, stdev=67.11, samples=19 00:42:01.404 iops : min= 416, max= 512, avg=478.32, stdev=16.78, samples=19 00:42:01.404 lat (msec) : 10=0.15%, 20=0.33%, 50=99.19%, 100=0.33% 00:42:01.404 cpu : usr=97.12%, sys=1.80%, ctx=122, majf=0, minf=25 00:42:01.404 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:01.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.404 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.404 issued rwts: total=4807,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.404 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.404 00:42:01.404 Run status group 0 (all jobs): 00:42:01.404 READ: bw=44.9MiB/s (47.1MB/s), 1919KiB/s-1954KiB/s (1965kB/s-2001kB/s), io=452MiB (474MB), run=10001-10054msec 00:42:01.404 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:42:01.404 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:01.404 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 
00:42:01.404 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:01.404 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:01.404 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:01.404 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.404 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:01.404 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.404 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:01.404 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.404 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:01.404 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.404 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:01.404 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:01.404 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:42:01.404 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:01.404 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.404 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:01.404 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.405 20:43:12 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- 
# local sub 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:01.405 bdev_null0 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.405 20:43:12 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:01.405 [2024-11-18 20:43:12.162130] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:01.405 bdev_null1 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.405 
20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:01.405 { 00:42:01.405 "params": { 00:42:01.405 "name": "Nvme$subsystem", 00:42:01.405 "trtype": "$TEST_TRANSPORT", 00:42:01.405 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:01.405 "adrfam": "ipv4", 00:42:01.405 "trsvcid": "$NVMF_PORT", 00:42:01.405 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:01.405 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:01.405 "hdgst": ${hdgst:-false}, 00:42:01.405 "ddgst": ${ddgst:-false} 00:42:01.405 }, 00:42:01.405 "method": "bdev_nvme_attach_controller" 00:42:01.405 } 00:42:01.405 EOF 00:42:01.405 )") 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- 
# for subsystem in "${@:-1}" 00:42:01.405 20:43:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:01.405 { 00:42:01.405 "params": { 00:42:01.405 "name": "Nvme$subsystem", 00:42:01.405 "trtype": "$TEST_TRANSPORT", 00:42:01.405 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:01.405 "adrfam": "ipv4", 00:42:01.405 "trsvcid": "$NVMF_PORT", 00:42:01.405 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:01.405 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:01.405 "hdgst": ${hdgst:-false}, 00:42:01.405 "ddgst": ${ddgst:-false} 00:42:01.405 }, 00:42:01.406 "method": "bdev_nvme_attach_controller" 00:42:01.406 } 00:42:01.406 EOF 00:42:01.406 )") 00:42:01.406 20:43:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:01.406 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:01.406 20:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:01.406 20:43:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:42:01.406 20:43:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:42:01.406 20:43:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:01.406 "params": { 00:42:01.406 "name": "Nvme0", 00:42:01.406 "trtype": "tcp", 00:42:01.406 "traddr": "10.0.0.2", 00:42:01.406 "adrfam": "ipv4", 00:42:01.406 "trsvcid": "4420", 00:42:01.406 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:01.406 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:01.406 "hdgst": false, 00:42:01.406 "ddgst": false 00:42:01.406 }, 00:42:01.406 "method": "bdev_nvme_attach_controller" 00:42:01.406 },{ 00:42:01.406 "params": { 00:42:01.406 "name": "Nvme1", 00:42:01.406 "trtype": "tcp", 00:42:01.406 "traddr": "10.0.0.2", 00:42:01.406 "adrfam": "ipv4", 00:42:01.406 "trsvcid": "4420", 00:42:01.406 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:01.406 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:01.406 "hdgst": false, 00:42:01.406 "ddgst": false 00:42:01.406 }, 00:42:01.406 "method": "bdev_nvme_attach_controller" 00:42:01.406 }' 00:42:01.406 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:01.406 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:01.406 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:01.406 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:01.406 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:01.406 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:01.406 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:01.406 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:01.406 20:43:12 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:01.406 20:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:01.406 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:42:01.406 ... 00:42:01.406 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:42:01.406 ... 00:42:01.406 fio-3.35 00:42:01.406 Starting 4 threads 00:42:06.666 00:42:06.666 filename0: (groupid=0, jobs=1): err= 0: pid=471022: Mon Nov 18 20:43:18 2024 00:42:06.666 read: IOPS=1893, BW=14.8MiB/s (15.5MB/s)(74.0MiB/5003msec) 00:42:06.666 slat (nsec): min=4277, max=73210, avg=12584.29, stdev=5143.89 00:42:06.666 clat (usec): min=861, max=7606, avg=4179.64, stdev=642.92 00:42:06.666 lat (usec): min=874, max=7620, avg=4192.23, stdev=642.85 00:42:06.666 clat percentiles (usec): 00:42:06.666 | 1.00th=[ 2245], 5.00th=[ 3294], 10.00th=[ 3523], 20.00th=[ 3818], 00:42:06.666 | 30.00th=[ 4015], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4228], 00:42:06.666 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4883], 95.00th=[ 5276], 00:42:06.666 | 99.00th=[ 6587], 99.50th=[ 6849], 99.90th=[ 7373], 99.95th=[ 7439], 00:42:06.666 | 99.99th=[ 7635] 00:42:06.666 bw ( KiB/s): min=14752, max=15504, per=25.06%, avg=15152.00, stdev=207.79, samples=10 00:42:06.666 iops : min= 1844, max= 1938, avg=1894.00, stdev=25.97, samples=10 00:42:06.666 lat (usec) : 1000=0.04% 00:42:06.666 lat (msec) : 2=0.49%, 4=28.28%, 10=71.19% 00:42:06.666 cpu : usr=93.42%, sys=6.06%, ctx=7, majf=0, minf=0 00:42:06.666 IO depths : 1=0.3%, 2=12.7%, 4=59.6%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:06.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.666 complete : 
0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.666 issued rwts: total=9475,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:06.666 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:06.666 filename0: (groupid=0, jobs=1): err= 0: pid=471023: Mon Nov 18 20:43:18 2024 00:42:06.666 read: IOPS=1859, BW=14.5MiB/s (15.2MB/s)(72.7MiB/5004msec) 00:42:06.666 slat (nsec): min=3943, max=67188, avg=13374.30, stdev=5841.18 00:42:06.666 clat (usec): min=713, max=7868, avg=4253.73, stdev=685.52 00:42:06.666 lat (usec): min=726, max=7875, avg=4267.10, stdev=685.31 00:42:06.666 clat percentiles (usec): 00:42:06.666 | 1.00th=[ 2442], 5.00th=[ 3392], 10.00th=[ 3621], 20.00th=[ 3916], 00:42:06.666 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4178], 60.00th=[ 4228], 00:42:06.666 | 70.00th=[ 4293], 80.00th=[ 4490], 90.00th=[ 5014], 95.00th=[ 5473], 00:42:06.666 | 99.00th=[ 6849], 99.50th=[ 7111], 99.90th=[ 7439], 99.95th=[ 7504], 00:42:06.666 | 99.99th=[ 7898] 00:42:06.666 bw ( KiB/s): min=14736, max=15104, per=24.61%, avg=14881.30, stdev=127.80, samples=10 00:42:06.666 iops : min= 1842, max= 1888, avg=1860.10, stdev=15.96, samples=10 00:42:06.666 lat (usec) : 750=0.01%, 1000=0.10% 00:42:06.666 lat (msec) : 2=0.53%, 4=23.37%, 10=76.00% 00:42:06.666 cpu : usr=92.96%, sys=6.30%, ctx=64, majf=0, minf=0 00:42:06.666 IO depths : 1=0.2%, 2=12.7%, 4=59.4%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:06.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.666 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.666 issued rwts: total=9307,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:06.666 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:06.666 filename1: (groupid=0, jobs=1): err= 0: pid=471024: Mon Nov 18 20:43:18 2024 00:42:06.666 read: IOPS=1936, BW=15.1MiB/s (15.9MB/s)(75.7MiB/5005msec) 00:42:06.666 slat (nsec): min=4221, max=71824, avg=12230.23, stdev=5360.29 00:42:06.666 clat (usec): 
min=743, max=7702, avg=4089.15, stdev=603.92 00:42:06.666 lat (usec): min=756, max=7716, avg=4101.38, stdev=603.99 00:42:06.666 clat percentiles (usec): 00:42:06.666 | 1.00th=[ 2212], 5.00th=[ 3163], 10.00th=[ 3425], 20.00th=[ 3720], 00:42:06.666 | 30.00th=[ 3916], 40.00th=[ 4047], 50.00th=[ 4146], 60.00th=[ 4228], 00:42:06.666 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4621], 95.00th=[ 5014], 00:42:06.666 | 99.00th=[ 6128], 99.50th=[ 6652], 99.90th=[ 7308], 99.95th=[ 7504], 00:42:06.666 | 99.99th=[ 7701] 00:42:06.666 bw ( KiB/s): min=15200, max=15632, per=25.63%, avg=15496.00, stdev=145.33, samples=10 00:42:06.666 iops : min= 1900, max= 1954, avg=1937.00, stdev=18.17, samples=10 00:42:06.666 lat (usec) : 750=0.01%, 1000=0.01% 00:42:06.666 lat (msec) : 2=0.52%, 4=34.32%, 10=65.15% 00:42:06.666 cpu : usr=93.05%, sys=6.43%, ctx=12, majf=0, minf=0 00:42:06.666 IO depths : 1=0.4%, 2=11.2%, 4=60.4%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:06.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.666 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.666 issued rwts: total=9692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:06.666 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:06.666 filename1: (groupid=0, jobs=1): err= 0: pid=471025: Mon Nov 18 20:43:18 2024 00:42:06.666 read: IOPS=1870, BW=14.6MiB/s (15.3MB/s)(73.1MiB/5002msec) 00:42:06.666 slat (nsec): min=3962, max=52512, avg=13023.27, stdev=5234.85 00:42:06.666 clat (usec): min=713, max=7773, avg=4232.84, stdev=693.78 00:42:06.666 lat (usec): min=726, max=7781, avg=4245.87, stdev=693.66 00:42:06.666 clat percentiles (usec): 00:42:06.666 | 1.00th=[ 2409], 5.00th=[ 3294], 10.00th=[ 3556], 20.00th=[ 3851], 00:42:06.666 | 30.00th=[ 4047], 40.00th=[ 4146], 50.00th=[ 4178], 60.00th=[ 4228], 00:42:06.666 | 70.00th=[ 4293], 80.00th=[ 4490], 90.00th=[ 4948], 95.00th=[ 5538], 00:42:06.666 | 99.00th=[ 6718], 99.50th=[ 7111], 99.90th=[ 7439], 
99.95th=[ 7701], 00:42:06.666 | 99.99th=[ 7767] 00:42:06.666 bw ( KiB/s): min=14384, max=15120, per=24.74%, avg=14959.80, stdev=209.97, samples=10 00:42:06.666 iops : min= 1798, max= 1890, avg=1869.90, stdev=26.20, samples=10 00:42:06.666 lat (usec) : 750=0.01%, 1000=0.05% 00:42:06.666 lat (msec) : 2=0.62%, 4=25.26%, 10=74.06% 00:42:06.666 cpu : usr=94.16%, sys=5.32%, ctx=6, majf=0, minf=9 00:42:06.666 IO depths : 1=0.1%, 2=11.8%, 4=59.7%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:06.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.666 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.666 issued rwts: total=9356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:06.666 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:06.666 00:42:06.666 Run status group 0 (all jobs): 00:42:06.666 READ: bw=59.0MiB/s (61.9MB/s), 14.5MiB/s-15.1MiB/s (15.2MB/s-15.9MB/s), io=296MiB (310MB), run=5002-5005msec 00:42:06.666 20:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:42:06.666 20:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:06.666 20:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:06.666 20:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:06.666 20:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:06.666 20:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:06.666 20:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.666 20:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:06.666 20:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.666 20:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:06.666 
20:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.666 20:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:06.666 20:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.666 20:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:06.666 20:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:06.666 20:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:42:06.666 20:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:06.666 20:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.666 20:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:06.666 20:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.666 20:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:06.666 20:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.666 20:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:06.925 20:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.925 00:42:06.925 real 0m24.474s 00:42:06.925 user 4m32.640s 00:42:06.925 sys 0m6.722s 00:42:06.925 20:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:06.925 20:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:06.925 ************************************ 00:42:06.925 END TEST fio_dif_rand_params 00:42:06.925 ************************************ 00:42:06.925 20:43:18 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:42:06.925 20:43:18 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:06.925 20:43:18 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:06.925 20:43:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:06.925 ************************************ 00:42:06.925 START TEST fio_dif_digest 00:42:06.925 ************************************ 00:42:06.925 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:42:06.925 20:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:42:06.925 20:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:42:06.925 20:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:42:06.925 20:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:42:06.925 20:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:42:06.925 20:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:42:06.925 20:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:42:06.925 20:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:42:06.925 20:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:42:06.925 20:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:42:06.925 20:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 
00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:06.926 bdev_null0 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:06.926 [2024-11-18 20:43:18.750909] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@560 -- # config=() 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:06.926 { 00:42:06.926 "params": { 00:42:06.926 "name": "Nvme$subsystem", 00:42:06.926 "trtype": "$TEST_TRANSPORT", 00:42:06.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:06.926 "adrfam": "ipv4", 00:42:06.926 "trsvcid": "$NVMF_PORT", 00:42:06.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:06.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:06.926 "hdgst": ${hdgst:-false}, 00:42:06.926 "ddgst": ${ddgst:-false} 00:42:06.926 }, 00:42:06.926 "method": "bdev_nvme_attach_controller" 00:42:06.926 } 00:42:06.926 EOF 00:42:06.926 )") 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:06.926 "params": { 00:42:06.926 "name": "Nvme0", 00:42:06.926 "trtype": "tcp", 00:42:06.926 "traddr": "10.0.0.2", 00:42:06.926 "adrfam": "ipv4", 00:42:06.926 "trsvcid": "4420", 00:42:06.926 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:06.926 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:06.926 "hdgst": true, 00:42:06.926 "ddgst": true 00:42:06.926 }, 00:42:06.926 "method": "bdev_nvme_attach_controller" 00:42:06.926 }' 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:06.926 20:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:07.184 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:42:07.184 ... 
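Before launching fio, `autotest_common.sh` (lines 1343-1356 in the trace) probes the SPDK fio plugin with `ldd` for an ASan runtime, trying `libasan` first and `libclang_rt.asan` second, and prepends whatever it finds to `LD_PRELOAD` so that fio and the plugin resolve the same sanitizer runtime. A self-contained sketch of that probe, with `/bin/true` standing in for the real plugin path so the snippet runs anywhere; on a binary with no sanitizer linked, `asan_lib` stays empty, matching the `[[ -n '' ]]` branches in the trace:

```shell
#!/usr/bin/env bash
# Sketch of the sanitizer probe traced in autotest_common.sh@1343-1356.
# The plugin path is a placeholder: /bin/true stands in for the real
# spdk_bdev fio plugin so this runs without an SPDK build tree.
plugin=/bin/true
sanitizers=('libasan' 'libclang_rt.asan')
asan_lib=
for sanitizer in "${sanitizers[@]}"; do
    # ldd prints "libfoo.so => /path/libfoo.so (0x...)"; field 3 is the path.
    asan_lib=$(ldd "$plugin" 2>/dev/null | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$asan_lib" ]] && break
done
# Preload the sanitizer runtime (if any) ahead of the plugin itself,
# then hand both to fio via the environment.
LD_PRELOAD="$asan_lib $plugin"
echo "LD_PRELOAD=$LD_PRELOAD"
```

This is why the trace repeats the `ldd | grep | awk` pipeline twice: once per candidate sanitizer library, breaking out early on the first hit.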
00:42:07.184 fio-3.35 00:42:07.184 Starting 3 threads 00:42:19.385 00:42:19.385 filename0: (groupid=0, jobs=1): err= 0: pid=471780: Mon Nov 18 20:43:29 2024 00:42:19.385 read: IOPS=199, BW=25.0MiB/s (26.2MB/s)(251MiB/10046msec) 00:42:19.385 slat (nsec): min=3910, max=39987, avg=13905.24, stdev=3610.70 00:42:19.385 clat (usec): min=11854, max=54971, avg=14962.61, stdev=1497.01 00:42:19.385 lat (usec): min=11866, max=54983, avg=14976.51, stdev=1497.03 00:42:19.385 clat percentiles (usec): 00:42:19.385 | 1.00th=[12518], 5.00th=[13304], 10.00th=[13698], 20.00th=[14091], 00:42:19.385 | 30.00th=[14484], 40.00th=[14746], 50.00th=[14877], 60.00th=[15139], 00:42:19.385 | 70.00th=[15401], 80.00th=[15664], 90.00th=[16188], 95.00th=[16581], 00:42:19.385 | 99.00th=[17695], 99.50th=[18220], 99.90th=[18482], 99.95th=[45876], 00:42:19.385 | 99.99th=[54789] 00:42:19.385 bw ( KiB/s): min=24832, max=26368, per=32.68%, avg=25689.60, stdev=400.70, samples=20 00:42:19.385 iops : min= 194, max= 206, avg=200.70, stdev= 3.13, samples=20 00:42:19.385 lat (msec) : 20=99.90%, 50=0.05%, 100=0.05% 00:42:19.385 cpu : usr=93.72%, sys=5.78%, ctx=17, majf=0, minf=172 00:42:19.385 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:19.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.385 issued rwts: total=2009,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:19.385 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:19.385 filename0: (groupid=0, jobs=1): err= 0: pid=471781: Mon Nov 18 20:43:29 2024 00:42:19.385 read: IOPS=210, BW=26.3MiB/s (27.5MB/s)(264MiB/10046msec) 00:42:19.385 slat (nsec): min=4431, max=83529, avg=15461.20, stdev=4841.86 00:42:19.385 clat (usec): min=10873, max=54141, avg=14236.92, stdev=1536.17 00:42:19.385 lat (usec): min=10893, max=54153, avg=14252.38, stdev=1535.94 00:42:19.385 clat percentiles (usec): 00:42:19.385 | 
1.00th=[11994], 5.00th=[12649], 10.00th=[13042], 20.00th=[13435], 00:42:19.385 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:42:19.385 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15401], 95.00th=[15926], 00:42:19.385 | 99.00th=[16712], 99.50th=[17171], 99.90th=[20579], 99.95th=[49021], 00:42:19.385 | 99.99th=[54264] 00:42:19.386 bw ( KiB/s): min=26112, max=27648, per=34.34%, avg=26995.20, stdev=393.76, samples=20 00:42:19.386 iops : min= 204, max= 216, avg=210.90, stdev= 3.08, samples=20 00:42:19.386 lat (msec) : 20=99.81%, 50=0.14%, 100=0.05% 00:42:19.386 cpu : usr=92.41%, sys=7.07%, ctx=18, majf=0, minf=186 00:42:19.386 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:19.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.386 issued rwts: total=2111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:19.386 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:19.386 filename0: (groupid=0, jobs=1): err= 0: pid=471782: Mon Nov 18 20:43:29 2024 00:42:19.386 read: IOPS=204, BW=25.5MiB/s (26.7MB/s)(256MiB/10047msec) 00:42:19.386 slat (nsec): min=4061, max=41542, avg=14049.07, stdev=3695.30 00:42:19.386 clat (usec): min=10563, max=48154, avg=14665.55, stdev=1428.41 00:42:19.386 lat (usec): min=10577, max=48167, avg=14679.60, stdev=1428.35 00:42:19.386 clat percentiles (usec): 00:42:19.386 | 1.00th=[12387], 5.00th=[13042], 10.00th=[13435], 20.00th=[13829], 00:42:19.386 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14615], 60.00th=[14877], 00:42:19.386 | 70.00th=[15139], 80.00th=[15401], 90.00th=[15926], 95.00th=[16450], 00:42:19.386 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18482], 99.95th=[46400], 00:42:19.386 | 99.99th=[47973] 00:42:19.386 bw ( KiB/s): min=25344, max=26880, per=33.34%, avg=26204.15, stdev=455.11, samples=20 00:42:19.386 iops : min= 198, max= 210, avg=204.70, stdev= 3.57, 
samples=20 00:42:19.386 lat (msec) : 20=99.90%, 50=0.10% 00:42:19.386 cpu : usr=92.85%, sys=6.64%, ctx=21, majf=0, minf=118 00:42:19.386 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:19.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.386 issued rwts: total=2050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:19.386 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:19.386 00:42:19.386 Run status group 0 (all jobs): 00:42:19.386 READ: bw=76.8MiB/s (80.5MB/s), 25.0MiB/s-26.3MiB/s (26.2MB/s-27.5MB/s), io=771MiB (809MB), run=10046-10047msec 00:42:19.386 20:43:29 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:42:19.386 20:43:29 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:42:19.386 20:43:29 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:42:19.386 20:43:29 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:19.386 20:43:29 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:42:19.386 20:43:29 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:19.386 20:43:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.386 20:43:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:19.386 20:43:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.386 20:43:29 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:19.386 20:43:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.386 20:43:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:19.386 20:43:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.386 00:42:19.386 real 0m11.200s 
00:42:19.386 user 0m29.198s 00:42:19.386 sys 0m2.226s 00:42:19.386 20:43:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:19.386 20:43:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:19.386 ************************************ 00:42:19.386 END TEST fio_dif_digest 00:42:19.386 ************************************ 00:42:19.386 20:43:29 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:42:19.386 20:43:29 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:42:19.386 20:43:29 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:19.386 20:43:29 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:42:19.386 20:43:29 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:19.386 20:43:29 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:42:19.386 20:43:29 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:19.386 20:43:29 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:19.386 rmmod nvme_tcp 00:42:19.386 rmmod nvme_fabrics 00:42:19.386 rmmod nvme_keyring 00:42:19.386 20:43:29 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:19.386 20:43:29 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:42:19.386 20:43:29 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:42:19.386 20:43:29 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 465745 ']' 00:42:19.386 20:43:29 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 465745 00:42:19.386 20:43:29 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 465745 ']' 00:42:19.386 20:43:29 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 465745 00:42:19.386 20:43:29 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:42:19.386 20:43:30 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:19.386 20:43:30 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 465745 00:42:19.386 20:43:30 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:19.386 20:43:30 nvmf_dif -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:19.386 20:43:30 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 465745' 00:42:19.386 killing process with pid 465745 00:42:19.386 20:43:30 nvmf_dif -- common/autotest_common.sh@973 -- # kill 465745 00:42:19.386 20:43:30 nvmf_dif -- common/autotest_common.sh@978 -- # wait 465745 00:42:19.386 20:43:30 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:42:19.386 20:43:30 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:19.386 Waiting for block devices as requested 00:42:19.386 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:19.644 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:19.644 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:19.902 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:19.902 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:19.902 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:19.902 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:20.160 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:20.160 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:20.160 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:20.160 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:20.419 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:20.419 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:20.419 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:20.419 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:20.703 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:20.703 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:20.703 20:43:32 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:20.703 20:43:32 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:20.703 20:43:32 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:42:20.703 20:43:32 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:42:21.009 20:43:32 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:42:21.009 20:43:32 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:42:21.009 20:43:32 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:21.009 20:43:32 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:21.009 20:43:32 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:21.009 20:43:32 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:21.009 20:43:32 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:22.918 20:43:34 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:22.918 00:42:22.918 real 1m7.223s 00:42:22.918 user 6m29.430s 00:42:22.918 sys 0m18.302s 00:42:22.918 20:43:34 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:22.918 20:43:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:22.918 ************************************ 00:42:22.918 END TEST nvmf_dif 00:42:22.918 ************************************ 00:42:22.918 20:43:34 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:22.918 20:43:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:22.918 20:43:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:22.918 20:43:34 -- common/autotest_common.sh@10 -- # set +x 00:42:22.918 ************************************ 00:42:22.918 START TEST nvmf_abort_qd_sizes 00:42:22.918 ************************************ 00:42:22.918 20:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:22.918 * Looking for test storage... 
00:42:22.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:22.918 20:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:22.918 20:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:42:22.918 20:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:22.918 20:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:22.918 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:22.918 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:22.918 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:22.918 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:42:22.918 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:42:22.918 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:42:22.918 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:42:22.918 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:42:22.918 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:42:22.918 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:42:22.918 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:22.918 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:42:22.918 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:42:22.918 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:22.918 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:23.177 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:42:23.177 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:42:23.177 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:23.177 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:42:23.177 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:42:23.177 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:42:23.177 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:42:23.177 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:23.177 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:42:23.177 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:42:23.177 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:23.177 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:23.177 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:42:23.177 20:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:23.177 20:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:23.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:23.177 --rc genhtml_branch_coverage=1 00:42:23.177 --rc genhtml_function_coverage=1 00:42:23.177 --rc genhtml_legend=1 00:42:23.177 --rc geninfo_all_blocks=1 00:42:23.177 --rc geninfo_unexecuted_blocks=1 00:42:23.177 00:42:23.177 ' 00:42:23.177 20:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:23.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:23.177 --rc genhtml_branch_coverage=1 00:42:23.177 --rc genhtml_function_coverage=1 00:42:23.177 --rc genhtml_legend=1 00:42:23.177 --rc 
geninfo_all_blocks=1 00:42:23.177 --rc geninfo_unexecuted_blocks=1 00:42:23.177 00:42:23.177 ' 00:42:23.177 20:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:23.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:23.177 --rc genhtml_branch_coverage=1 00:42:23.177 --rc genhtml_function_coverage=1 00:42:23.177 --rc genhtml_legend=1 00:42:23.177 --rc geninfo_all_blocks=1 00:42:23.177 --rc geninfo_unexecuted_blocks=1 00:42:23.177 00:42:23.177 ' 00:42:23.177 20:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:23.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:23.177 --rc genhtml_branch_coverage=1 00:42:23.177 --rc genhtml_function_coverage=1 00:42:23.177 --rc genhtml_legend=1 00:42:23.177 --rc geninfo_all_blocks=1 00:42:23.177 --rc geninfo_unexecuted_blocks=1 00:42:23.177 00:42:23.177 ' 00:42:23.177 20:43:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:23.177 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:42:23.177 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:23.177 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:23.177 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:23.177 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:23.177 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:23.177 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:23.177 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:23.177 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:23.178 20:43:34 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:23.178 20:43:34 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:23.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:42:23.178 20:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:25.712 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:25.712 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:42:25.712 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:25.712 20:43:37 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:42:25.712 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:25.712 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:25.712 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:25.712 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:42:25.712 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:25.712 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:42:25.712 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:42:25.712 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:42:25.712 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:42:25.712 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:42:25.712 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:42:25.712 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:25.712 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:25.712 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:25.712 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:25.712 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:25.712 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:25.712 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:25.712 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:25.712 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:25.712 20:43:37 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:25.713 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:25.713 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:25.713 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:25.713 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:25.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:25.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:42:25.713 00:42:25.713 --- 10.0.0.2 ping statistics --- 00:42:25.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:25.713 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:25.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:25.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:42:25.713 00:42:25.713 --- 10.0.0.1 ping statistics --- 00:42:25.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:25.713 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:42:25.713 20:43:37 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:26.651 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:26.651 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:26.651 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:26.651 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:26.651 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:26.651 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:26.651 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:26.651 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:26.651 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:26.651 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:26.651 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:26.651 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:26.651 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:26.651 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:26.651 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:26.651 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:27.591 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:42:27.849 20:43:39 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:27.849 20:43:39 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:27.849 20:43:39 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:27.849 20:43:39 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:27.849 20:43:39 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:27.849 20:43:39 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:27.849 20:43:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:42:27.849 20:43:39 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:27.849 20:43:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:27.849 20:43:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:27.849 20:43:39 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=476699 00:42:27.849 20:43:39 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:42:27.849 20:43:39 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 476699 00:42:27.849 20:43:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 476699 ']' 00:42:27.849 20:43:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:27.849 20:43:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:27.849 20:43:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:27.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:27.849 20:43:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:27.849 20:43:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:27.849 [2024-11-18 20:43:39.751379] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:42:27.849 [2024-11-18 20:43:39.751453] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:27.849 [2024-11-18 20:43:39.824678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:28.109 [2024-11-18 20:43:39.875524] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:28.109 [2024-11-18 20:43:39.875573] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:28.109 [2024-11-18 20:43:39.875587] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:28.109 [2024-11-18 20:43:39.875598] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:28.109 [2024-11-18 20:43:39.875607] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:42:28.109 [2024-11-18 20:43:39.877144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:28.109 [2024-11-18 20:43:39.877209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:28.109 [2024-11-18 20:43:39.877277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:28.109 [2024-11-18 20:43:39.877280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:28.109 20:43:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:28.109 20:43:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:42:28.109 20:43:39 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:28.109 20:43:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:28.109 20:43:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:28.109 20:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:28.109 20:43:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:42:28.109 20:43:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:42:28.109 20:43:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:42:28.109 20:43:40 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:42:28.109 20:43:40 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:42:28.109 20:43:40 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:42:28.109 20:43:40 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:42:28.109 20:43:40 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:42:28.109 20:43:40 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:42:28.109 20:43:40 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:42:28.109 20:43:40 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:42:28.109 20:43:40 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:42:28.109 20:43:40 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:42:28.109 20:43:40 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:42:28.109 20:43:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:42:28.109 20:43:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:42:28.109 20:43:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:42:28.109 20:43:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:28.109 20:43:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:28.109 20:43:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:28.109 ************************************ 00:42:28.109 START TEST spdk_target_abort 00:42:28.109 ************************************ 00:42:28.109 20:43:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:42:28.109 20:43:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:42:28.109 20:43:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:42:28.109 20:43:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.109 20:43:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:31.400 spdk_targetn1 00:42:31.400 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:31.400 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:31.400 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:31.400 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:31.400 [2024-11-18 20:43:42.897254] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:31.400 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:31.400 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:42:31.400 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:31.400 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:31.400 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:31.400 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:42:31.400 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:31.400 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:31.400 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:31.400 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:42:31.400 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:31.400 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:31.400 [2024-11-18 20:43:42.937560] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:31.400 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:31.400 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:42:31.400 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:31.400 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:31.400 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:42:31.400 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:31.400 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:31.400 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:31.400 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:31.400 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:31.400 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:31.401 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:31.401 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:31.401 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:31.401 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:31.401 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:42:31.401 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:31.401 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:42:31.401 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:31.401 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:31.401 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:31.401 20:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:34.688 Initializing NVMe Controllers 00:42:34.688 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:34.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:34.688 Initialization complete. Launching workers. 
00:42:34.688 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12828, failed: 0 00:42:34.688 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1191, failed to submit 11637 00:42:34.688 success 735, unsuccessful 456, failed 0 00:42:34.688 20:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:34.688 20:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:37.975 Initializing NVMe Controllers 00:42:37.975 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:37.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:37.975 Initialization complete. Launching workers. 00:42:37.975 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8510, failed: 0 00:42:37.975 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1267, failed to submit 7243 00:42:37.975 success 352, unsuccessful 915, failed 0 00:42:37.975 20:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:37.975 20:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:41.259 Initializing NVMe Controllers 00:42:41.259 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:41.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:41.259 Initialization complete. Launching workers. 
00:42:41.259 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31155, failed: 0 00:42:41.259 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2744, failed to submit 28411 00:42:41.259 success 514, unsuccessful 2230, failed 0 00:42:41.259 20:43:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:42:41.259 20:43:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:41.259 20:43:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:41.259 20:43:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:41.259 20:43:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:42:41.259 20:43:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:41.259 20:43:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:42.196 20:43:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:42.196 20:43:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 476699 00:42:42.196 20:43:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 476699 ']' 00:42:42.196 20:43:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 476699 00:42:42.196 20:43:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:42:42.196 20:43:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:42.196 20:43:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 476699 00:42:42.196 20:43:54 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:42.196 20:43:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:42.196 20:43:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 476699' 00:42:42.196 killing process with pid 476699 00:42:42.196 20:43:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 476699 00:42:42.196 20:43:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 476699 00:42:42.453 00:42:42.453 real 0m14.276s 00:42:42.453 user 0m54.223s 00:42:42.453 sys 0m2.530s 00:42:42.453 20:43:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:42.453 20:43:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:42.453 ************************************ 00:42:42.453 END TEST spdk_target_abort 00:42:42.453 ************************************ 00:42:42.453 20:43:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:42:42.453 20:43:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:42.453 20:43:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:42.453 20:43:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:42.453 ************************************ 00:42:42.453 START TEST kernel_target_abort 00:42:42.453 ************************************ 00:42:42.453 20:43:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:42:42.453 20:43:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:42:42.453 20:43:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:42:42.453 20:43:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:42:42.453 20:43:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:42:42.453 20:43:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:42.453 20:43:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:42.453 20:43:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:42:42.453 20:43:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:42.453 20:43:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:42:42.453 20:43:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:42:42.453 20:43:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:42:42.453 20:43:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:42:42.453 20:43:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:42:42.453 20:43:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:42:42.453 20:43:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:42.453 20:43:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:42.453 20:43:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:42:42.453 20:43:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:42:42.453 20:43:54 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:42:42.453 20:43:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:42:42.453 20:43:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:42:42.453 20:43:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:43.828 Waiting for block devices as requested 00:42:43.828 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:43.828 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:44.087 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:44.087 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:44.087 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:44.348 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:44.348 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:44.348 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:44.348 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:44.348 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:44.607 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:44.607 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:44.607 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:44.866 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:44.866 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:44.866 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:44.866 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:45.126 20:43:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:42:45.126 20:43:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:42:45.126 20:43:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:42:45.126 20:43:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local 
device=nvme0n1 00:42:45.126 20:43:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:42:45.126 20:43:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:42:45.126 20:43:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:42:45.126 20:43:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:42:45.126 20:43:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:42:45.126 No valid GPT data, bailing 00:42:45.126 20:43:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:42:45.126 00:42:45.126 Discovery Log Number of Records 2, Generation counter 2 00:42:45.126 =====Discovery Log Entry 0====== 00:42:45.126 trtype: tcp 00:42:45.126 adrfam: ipv4 00:42:45.126 subtype: current discovery subsystem 00:42:45.126 treq: not specified, sq flow control disable supported 00:42:45.126 portid: 1 00:42:45.126 trsvcid: 4420 00:42:45.126 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:42:45.126 traddr: 10.0.0.1 00:42:45.126 eflags: none 00:42:45.126 sectype: none 00:42:45.126 =====Discovery Log Entry 1====== 00:42:45.126 trtype: tcp 00:42:45.126 adrfam: ipv4 00:42:45.126 subtype: nvme subsystem 00:42:45.126 treq: not specified, sq flow control disable supported 00:42:45.126 portid: 1 00:42:45.126 trsvcid: 4420 00:42:45.126 subnqn: nqn.2016-06.io.spdk:testnqn 00:42:45.126 traddr: 10.0.0.1 00:42:45.126 eflags: none 00:42:45.126 sectype: none 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:45.126 20:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:48.417 Initializing NVMe Controllers 00:42:48.417 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:48.417 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:48.417 Initialization complete. Launching workers. 
00:42:48.417 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56588, failed: 0 00:42:48.417 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 56588, failed to submit 0 00:42:48.417 success 0, unsuccessful 56588, failed 0 00:42:48.417 20:44:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:48.417 20:44:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:51.710 Initializing NVMe Controllers 00:42:51.710 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:51.710 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:51.710 Initialization complete. Launching workers. 00:42:51.710 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 100026, failed: 0 00:42:51.710 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25214, failed to submit 74812 00:42:51.710 success 0, unsuccessful 25214, failed 0 00:42:51.710 20:44:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:51.710 20:44:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:55.041 Initializing NVMe Controllers 00:42:55.041 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:55.041 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:55.041 Initialization complete. Launching workers. 
00:42:55.041 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 97721, failed: 0 00:42:55.041 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24438, failed to submit 73283 00:42:55.041 success 0, unsuccessful 24438, failed 0 00:42:55.041 20:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:42:55.041 20:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:42:55.041 20:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:42:55.041 20:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:55.041 20:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:55.041 20:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:42:55.041 20:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:55.041 20:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:42:55.041 20:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:42:55.041 20:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:55.979 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:55.979 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:55.979 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:55.979 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:55.979 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:55.979 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:55.979 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:55.979 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:55.979 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:55.979 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:55.979 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:55.979 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:55.979 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:55.979 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:55.979 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:55.979 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:56.913 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:42:56.913 00:42:56.913 real 0m14.408s 00:42:56.913 user 0m6.667s 00:42:56.913 sys 0m3.277s 00:42:56.913 20:44:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:56.913 20:44:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:56.913 ************************************ 00:42:56.913 END TEST kernel_target_abort 00:42:56.913 ************************************ 00:42:56.913 20:44:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:42:56.913 20:44:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:42:56.913 20:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:56.913 20:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:42:56.913 20:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:56.913 20:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:42:56.913 20:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:56.913 20:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:56.913 rmmod nvme_tcp 00:42:56.913 rmmod nvme_fabrics 00:42:56.913 rmmod nvme_keyring 00:42:56.913 20:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:42:56.913 20:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:42:56.913 20:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:42:56.913 20:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 476699 ']' 00:42:56.913 20:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 476699 00:42:56.913 20:44:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 476699 ']' 00:42:56.913 20:44:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 476699 00:42:56.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (476699) - No such process 00:42:56.913 20:44:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 476699 is not found' 00:42:56.913 Process with pid 476699 is not found 00:42:56.913 20:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:42:56.913 20:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:58.290 Waiting for block devices as requested 00:42:58.290 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:58.290 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:58.550 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:58.550 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:58.550 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:58.550 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:58.809 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:58.809 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:58.809 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:58.809 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:59.067 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:59.067 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:59.067 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:59.067 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:59.325 0000:80:04.2 
(8086 0e22): vfio-pci -> ioatdma 00:42:59.325 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:59.325 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:59.584 20:44:11 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:59.584 20:44:11 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:59.584 20:44:11 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:42:59.584 20:44:11 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:42:59.584 20:44:11 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:59.584 20:44:11 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:42:59.584 20:44:11 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:59.584 20:44:11 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:59.584 20:44:11 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:59.584 20:44:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:59.584 20:44:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:01.491 20:44:13 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:01.491 00:43:01.491 real 0m38.615s 00:43:01.491 user 1m3.222s 00:43:01.491 sys 0m9.586s 00:43:01.491 20:44:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:01.491 20:44:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:01.491 ************************************ 00:43:01.491 END TEST nvmf_abort_qd_sizes 00:43:01.491 ************************************ 00:43:01.491 20:44:13 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:43:01.491 20:44:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:01.491 20:44:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:43:01.491 20:44:13 -- common/autotest_common.sh@10 -- # set +x 00:43:01.491 ************************************ 00:43:01.491 START TEST keyring_file 00:43:01.491 ************************************ 00:43:01.491 20:44:13 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:43:01.751 * Looking for test storage... 00:43:01.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:01.751 20:44:13 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:43:01.751 20:44:13 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:43:01.751 20:44:13 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:43:01.751 20:44:13 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@345 -- # : 1 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:01.751 20:44:13 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@353 -- # local d=1 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@355 -- # echo 1 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@353 -- # local d=2 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@355 -- # echo 2 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@368 -- # return 0 00:43:01.751 20:44:13 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:01.751 20:44:13 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:43:01.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:01.751 --rc genhtml_branch_coverage=1 00:43:01.751 --rc genhtml_function_coverage=1 00:43:01.751 --rc genhtml_legend=1 00:43:01.751 --rc geninfo_all_blocks=1 00:43:01.751 --rc geninfo_unexecuted_blocks=1 00:43:01.751 00:43:01.751 ' 00:43:01.751 20:44:13 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:43:01.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:01.751 --rc genhtml_branch_coverage=1 00:43:01.751 --rc genhtml_function_coverage=1 00:43:01.751 --rc genhtml_legend=1 00:43:01.751 --rc geninfo_all_blocks=1 00:43:01.751 --rc 
geninfo_unexecuted_blocks=1 00:43:01.751 00:43:01.751 ' 00:43:01.751 20:44:13 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:43:01.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:01.751 --rc genhtml_branch_coverage=1 00:43:01.751 --rc genhtml_function_coverage=1 00:43:01.751 --rc genhtml_legend=1 00:43:01.751 --rc geninfo_all_blocks=1 00:43:01.751 --rc geninfo_unexecuted_blocks=1 00:43:01.751 00:43:01.751 ' 00:43:01.751 20:44:13 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:43:01.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:01.751 --rc genhtml_branch_coverage=1 00:43:01.751 --rc genhtml_function_coverage=1 00:43:01.751 --rc genhtml_legend=1 00:43:01.751 --rc geninfo_all_blocks=1 00:43:01.751 --rc geninfo_unexecuted_blocks=1 00:43:01.751 00:43:01.751 ' 00:43:01.751 20:44:13 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:01.751 20:44:13 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:01.751 20:44:13 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:43:01.751 20:44:13 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:01.751 20:44:13 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:01.751 20:44:13 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:01.751 20:44:13 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:01.751 20:44:13 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:01.751 20:44:13 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:01.751 20:44:13 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:01.751 20:44:13 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:01.751 20:44:13 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:01.751 20:44:13 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:01.751 20:44:13 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:01.751 20:44:13 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:43:01.751 20:44:13 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:01.751 20:44:13 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:01.751 20:44:13 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:01.751 20:44:13 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:01.751 20:44:13 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:01.751 20:44:13 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:01.751 20:44:13 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:01.751 20:44:13 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:01.752 20:44:13 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:01.752 20:44:13 keyring_file -- paths/export.sh@5 -- # export PATH 00:43:01.752 20:44:13 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:01.752 20:44:13 keyring_file -- nvmf/common.sh@51 -- # : 0 00:43:01.752 20:44:13 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:01.752 20:44:13 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:01.752 20:44:13 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:01.752 20:44:13 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:01.752 20:44:13 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:01.752 20:44:13 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
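An aside on the `line 33: [: : integer expression expected` message traced just below: it comes from handing the `[` builtin an empty string where it expects an integer, exactly as in the traced `'[' '' -eq 1 ']'`. A minimal reproduction and the usual guard, as a sketch (the `flag` variable is illustrative, not taken from `nvmf/common.sh`):

```shell
flag=""                        # an unset/empty value reaching a numeric test
if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "enabled"
fi                             # '[' reports "integer expression expected" (status 2)

# Common guard: default the value before the numeric comparison
if [ "${flag:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"
fi
```

With the `${flag:-0}` expansion the comparison always sees an integer, so the warning disappears and the test takes the intended branch.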
00:43:01.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:01.752 20:44:13 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:01.752 20:44:13 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:01.752 20:44:13 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:01.752 20:44:13 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:01.752 20:44:13 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:01.752 20:44:13 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:01.752 20:44:13 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:43:01.752 20:44:13 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:43:01.752 20:44:13 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:43:01.752 20:44:13 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:43:01.752 20:44:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:43:01.752 20:44:13 keyring_file -- keyring/common.sh@17 -- # name=key0 00:43:01.752 20:44:13 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:01.752 20:44:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:01.752 20:44:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:01.752 20:44:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.XwXuMrsmUE 00:43:01.752 20:44:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:01.752 20:44:13 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:01.752 20:44:13 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:43:01.752 20:44:13 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:01.752 20:44:13 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:43:01.752 20:44:13 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:43:01.752 20:44:13 keyring_file -- nvmf/common.sh@733 -- # python - 00:43:01.752 20:44:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.XwXuMrsmUE 00:43:01.752 20:44:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.XwXuMrsmUE 00:43:01.752 20:44:13 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.XwXuMrsmUE 00:43:01.752 20:44:13 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:43:01.752 20:44:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:43:01.752 20:44:13 keyring_file -- keyring/common.sh@17 -- # name=key1 00:43:01.752 20:44:13 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:43:01.752 20:44:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:01.752 20:44:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:01.752 20:44:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.F0VGMg4EUE 00:43:01.752 20:44:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:43:01.752 20:44:13 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:43:01.752 20:44:13 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:43:01.752 20:44:13 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:01.752 20:44:13 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:43:01.752 20:44:13 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:43:01.752 20:44:13 keyring_file -- nvmf/common.sh@733 -- # python - 00:43:01.752 20:44:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.F0VGMg4EUE 00:43:01.752 20:44:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.F0VGMg4EUE 00:43:01.752 20:44:13 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.F0VGMg4EUE 
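For context on the `format_interchange_psk ... | python -` calls above: the NVMe/TCP TLS interchange format (TP 8006) wraps the configured key bytes plus a little-endian CRC32, base64-encoded, between a `NVMeTLSkey-1:<hmac>:` prefix and a trailing `:`. The following is a sketch of that transform under the assumed TP 8006 layout, not SPDK's exact `nvmf/common.sh` code:

```shell
# Sketch of TP 8006 interchange-PSK wrapping (assumed layout):
#   NVMeTLSkey-1:<hmac>:base64(key_bytes || crc32_le(key_bytes)):
key=00112233445566778899aabbccddeeff
digest=0    # 0 -> hmac indicator "01" (no hash), per the assumed mapping below
psk=$(python3 - "$key" "$digest" <<'EOF'
import base64, binascii, sys

key = bytes.fromhex(sys.argv[1])
hmac = {0: "01", 1: "02", 2: "03"}[int(sys.argv[2])]   # none / SHA-256 / SHA-384
crc = binascii.crc32(key).to_bytes(4, "little")        # CRC32 over the key bytes
print(f"NVMeTLSkey-1:{hmac}:{base64.b64encode(key + crc).decode()}:")
EOF
)
echo "$psk"
```

For a 16-byte key the base64 payload covers 20 bytes (key + 4-byte CRC), so the whole string is 45 characters; the test then `chmod 0600`s the file holding it, which the permission checks later in this run depend on.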
00:43:01.752 20:44:13 keyring_file -- keyring/file.sh@30 -- # tgtpid=482456 00:43:01.752 20:44:13 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:43:01.752 20:44:13 keyring_file -- keyring/file.sh@32 -- # waitforlisten 482456 00:43:01.752 20:44:13 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 482456 ']' 00:43:01.752 20:44:13 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:01.752 20:44:13 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:01.752 20:44:13 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:01.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:01.752 20:44:13 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:01.752 20:44:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:01.752 [2024-11-18 20:44:13.744108] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:43:01.752 [2024-11-18 20:44:13.744192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482456 ] 00:43:02.012 [2024-11-18 20:44:13.812156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:02.012 [2024-11-18 20:44:13.861334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:02.348 20:44:14 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:02.348 20:44:14 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:43:02.348 20:44:14 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:43:02.348 20:44:14 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:02.348 20:44:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:02.348 [2024-11-18 20:44:14.131817] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:02.348 null0 00:43:02.348 [2024-11-18 20:44:14.163860] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:43:02.348 [2024-11-18 20:44:14.164362] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:43:02.348 20:44:14 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:02.348 20:44:14 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:02.348 20:44:14 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:43:02.348 20:44:14 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:02.348 20:44:14 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:43:02.348 20:44:14 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:43:02.348 20:44:14 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:43:02.348 20:44:14 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:02.348 20:44:14 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:02.348 20:44:14 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:02.348 20:44:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:02.348 [2024-11-18 20:44:14.191909] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:43:02.348 request: 00:43:02.348 { 00:43:02.348 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:43:02.348 "secure_channel": false, 00:43:02.348 "listen_address": { 00:43:02.348 "trtype": "tcp", 00:43:02.348 "traddr": "127.0.0.1", 00:43:02.348 "trsvcid": "4420" 00:43:02.348 }, 00:43:02.348 "method": "nvmf_subsystem_add_listener", 00:43:02.348 "req_id": 1 00:43:02.348 } 00:43:02.348 Got JSON-RPC error response 00:43:02.348 response: 00:43:02.348 { 00:43:02.348 "code": -32602, 00:43:02.348 "message": "Invalid parameters" 00:43:02.348 } 00:43:02.348 20:44:14 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:43:02.348 20:44:14 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:43:02.348 20:44:14 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:02.348 20:44:14 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:02.348 20:44:14 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:02.349 20:44:14 keyring_file -- keyring/file.sh@47 -- # bperfpid=482473 00:43:02.349 20:44:14 keyring_file -- keyring/file.sh@49 -- # waitforlisten 482473 /var/tmp/bperf.sock 00:43:02.349 20:44:14 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 482473 ']' 00:43:02.349 20:44:14 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:02.349 20:44:14 
keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:43:02.349 20:44:14 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:02.349 20:44:14 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:02.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:02.349 20:44:14 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:02.349 20:44:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:02.349 [2024-11-18 20:44:14.242823] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:43:02.349 [2024-11-18 20:44:14.242915] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482473 ] 00:43:02.668 [2024-11-18 20:44:14.311169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:02.668 [2024-11-18 20:44:14.356192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:02.668 20:44:14 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:02.668 20:44:14 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:43:02.668 20:44:14 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XwXuMrsmUE 00:43:02.668 20:44:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XwXuMrsmUE 00:43:02.926 20:44:14 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.F0VGMg4EUE 00:43:02.926 20:44:14 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.F0VGMg4EUE 00:43:03.185 20:44:15 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:43:03.185 20:44:15 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:43:03.185 20:44:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:03.185 20:44:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:03.185 20:44:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:03.444 20:44:15 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.XwXuMrsmUE == \/\t\m\p\/\t\m\p\.\X\w\X\u\M\r\s\m\U\E ]] 00:43:03.444 20:44:15 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:43:03.444 20:44:15 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:43:03.444 20:44:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:03.444 20:44:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:03.444 20:44:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:03.702 20:44:15 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.F0VGMg4EUE == \/\t\m\p\/\t\m\p\.\F\0\V\G\M\g\4\E\U\E ]] 00:43:03.702 20:44:15 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:43:03.702 20:44:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:03.702 20:44:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:03.702 20:44:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:03.702 20:44:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:03.702 20:44:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
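The `get_refcnt` pattern exercised throughout this run pipes `keyring_get_keys` through `jq '.[] | select(.name == "keyX")'` and `jq -r .refcnt`. Against a hypothetical sample of that JSON (field names inferred from the queries above; paths and counts are illustrative), the same selection can be sketched without a live bperf socket, here via python3 rather than jq:

```shell
# Hypothetical keyring_get_keys payload; values are illustrative only
keys='[{"name":"key0","path":"/tmp/tmp.AAAA","refcnt":2},
       {"name":"key1","path":"/tmp/tmp.BBBB","refcnt":1}]'

# Equivalent of: jq '.[] | select(.name == "key0")' | jq -r .refcnt
refcnt=$(printf '%s' "$keys" | python3 -c '
import json, sys
keys = json.load(sys.stdin)
print(next(k["refcnt"] for k in keys if k["name"] == "key0"))')
echo "$refcnt"   # prints 2 for this sample
```

The test's `(( N == N ))` checks then compare that count against the expected number of live references (attached controllers plus the keyring entry itself).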
00:43:03.961 20:44:15 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:43:03.961 20:44:15 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:43:03.961 20:44:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:03.961 20:44:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:03.961 20:44:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:03.961 20:44:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:03.961 20:44:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:04.220 20:44:16 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:43:04.220 20:44:16 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:04.220 20:44:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:04.478 [2024-11-18 20:44:16.375204] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:04.478 nvme0n1 00:43:04.478 20:44:16 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:43:04.478 20:44:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:04.478 20:44:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:04.478 20:44:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:04.478 20:44:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:04.478 20:44:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:43:04.737 20:44:16 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:43:04.737 20:44:16 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:43:04.737 20:44:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:04.737 20:44:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:04.737 20:44:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:04.737 20:44:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:04.737 20:44:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:05.308 20:44:17 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:43:05.308 20:44:17 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:05.308 Running I/O for 1 seconds... 00:43:06.248 10216.00 IOPS, 39.91 MiB/s 00:43:06.248 Latency(us) 00:43:06.248 [2024-11-18T19:44:18.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:06.248 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:43:06.248 nvme0n1 : 1.01 10267.76 40.11 0.00 0.00 12428.65 5315.70 23107.51 00:43:06.248 [2024-11-18T19:44:18.256Z] =================================================================================================================== 00:43:06.248 [2024-11-18T19:44:18.256Z] Total : 10267.76 40.11 0.00 0.00 12428.65 5315.70 23107.51 00:43:06.248 { 00:43:06.248 "results": [ 00:43:06.248 { 00:43:06.248 "job": "nvme0n1", 00:43:06.248 "core_mask": "0x2", 00:43:06.248 "workload": "randrw", 00:43:06.248 "percentage": 50, 00:43:06.248 "status": "finished", 00:43:06.248 "queue_depth": 128, 00:43:06.248 "io_size": 4096, 00:43:06.248 "runtime": 1.007523, 00:43:06.248 "iops": 10267.755674064016, 00:43:06.248 "mibps": 40.10842060181256, 
00:43:06.248 "io_failed": 0, 00:43:06.248 "io_timeout": 0, 00:43:06.248 "avg_latency_us": 12428.645537547212, 00:43:06.248 "min_latency_us": 5315.697777777777, 00:43:06.248 "max_latency_us": 23107.508148148147 00:43:06.248 } 00:43:06.248 ], 00:43:06.248 "core_count": 1 00:43:06.248 } 00:43:06.248 20:44:18 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:06.248 20:44:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:06.507 20:44:18 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:43:06.507 20:44:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:06.507 20:44:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:06.507 20:44:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:06.507 20:44:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:06.507 20:44:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:06.767 20:44:18 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:43:06.767 20:44:18 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:43:06.767 20:44:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:06.767 20:44:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:06.767 20:44:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:06.767 20:44:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:06.767 20:44:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:07.026 20:44:18 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:43:07.026 20:44:18 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:07.026 20:44:18 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:43:07.027 20:44:18 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:07.027 20:44:18 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:43:07.027 20:44:18 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:07.027 20:44:18 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:43:07.027 20:44:18 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:07.027 20:44:18 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:07.027 20:44:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:07.287 [2024-11-18 20:44:19.234613] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:43:07.287 [2024-11-18 20:44:19.235254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18edb70 (107): Transport endpoint is not connected 00:43:07.287 [2024-11-18 20:44:19.236245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18edb70 (9): Bad file descriptor 00:43:07.287 [2024-11-18 20:44:19.237244] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:43:07.287 [2024-11-18 20:44:19.237264] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:43:07.287 [2024-11-18 20:44:19.237278] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:43:07.287 [2024-11-18 20:44:19.237294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:43:07.287 request: 00:43:07.287 { 00:43:07.287 "name": "nvme0", 00:43:07.287 "trtype": "tcp", 00:43:07.287 "traddr": "127.0.0.1", 00:43:07.287 "adrfam": "ipv4", 00:43:07.287 "trsvcid": "4420", 00:43:07.287 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:07.287 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:07.287 "prchk_reftag": false, 00:43:07.287 "prchk_guard": false, 00:43:07.287 "hdgst": false, 00:43:07.287 "ddgst": false, 00:43:07.287 "psk": "key1", 00:43:07.287 "allow_unrecognized_csi": false, 00:43:07.287 "method": "bdev_nvme_attach_controller", 00:43:07.287 "req_id": 1 00:43:07.287 } 00:43:07.287 Got JSON-RPC error response 00:43:07.287 response: 00:43:07.287 { 00:43:07.287 "code": -5, 00:43:07.287 "message": "Input/output error" 00:43:07.287 } 00:43:07.287 20:44:19 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:43:07.287 20:44:19 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:07.287 20:44:19 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:07.287 20:44:19 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:07.287 20:44:19 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:43:07.287 20:44:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:07.287 20:44:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:07.287 20:44:19 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:43:07.287 20:44:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:07.287 20:44:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:07.546 20:44:19 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:43:07.546 20:44:19 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:43:07.546 20:44:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:07.546 20:44:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:07.546 20:44:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:07.546 20:44:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:07.546 20:44:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:07.806 20:44:19 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:43:07.806 20:44:19 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:43:07.806 20:44:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:08.067 20:44:20 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:43:08.067 20:44:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:43:08.637 20:44:20 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:43:08.637 20:44:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:08.637 20:44:20 keyring_file -- keyring/file.sh@78 -- # jq length 00:43:08.637 20:44:20 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:43:08.637 20:44:20 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.XwXuMrsmUE 00:43:08.637 20:44:20 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.XwXuMrsmUE 00:43:08.637 20:44:20 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:43:08.637 20:44:20 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.XwXuMrsmUE 00:43:08.637 20:44:20 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:43:08.637 20:44:20 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:08.637 20:44:20 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:43:08.637 20:44:20 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:08.637 20:44:20 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XwXuMrsmUE 00:43:08.637 20:44:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XwXuMrsmUE 00:43:08.896 [2024-11-18 20:44:20.891242] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.XwXuMrsmUE': 0100660 00:43:08.896 [2024-11-18 20:44:20.891280] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:43:08.896 request: 00:43:08.896 { 00:43:08.896 "name": "key0", 00:43:08.896 "path": "/tmp/tmp.XwXuMrsmUE", 00:43:08.896 "method": "keyring_file_add_key", 00:43:08.896 "req_id": 1 00:43:08.896 } 00:43:08.896 Got JSON-RPC error response 00:43:08.896 response: 00:43:08.896 { 00:43:08.896 "code": -1, 00:43:08.896 "message": "Operation not permitted" 00:43:08.896 } 00:43:09.155 20:44:20 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:43:09.155 20:44:20 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:09.155 20:44:20 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:09.155 20:44:20 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:09.155 20:44:20 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.XwXuMrsmUE 00:43:09.155 20:44:20 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XwXuMrsmUE 00:43:09.155 20:44:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XwXuMrsmUE 00:43:09.413 20:44:21 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.XwXuMrsmUE 00:43:09.413 20:44:21 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:43:09.414 20:44:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:09.414 20:44:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:09.414 20:44:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:09.414 20:44:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:09.414 20:44:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:09.672 20:44:21 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:43:09.672 20:44:21 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:09.672 20:44:21 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:43:09.672 20:44:21 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:09.672 20:44:21 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:43:09.672 20:44:21 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:09.672 20:44:21 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:43:09.672 20:44:21 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:09.672 20:44:21 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:09.672 20:44:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:09.930 [2024-11-18 20:44:21.729550] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.XwXuMrsmUE': No such file or directory 00:43:09.930 [2024-11-18 20:44:21.729588] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:43:09.930 [2024-11-18 20:44:21.729635] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:43:09.930 [2024-11-18 20:44:21.729659] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:43:09.930 [2024-11-18 20:44:21.729673] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:43:09.930 [2024-11-18 20:44:21.729685] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:43:09.930 request: 00:43:09.930 { 00:43:09.930 "name": "nvme0", 00:43:09.930 "trtype": "tcp", 00:43:09.930 "traddr": "127.0.0.1", 00:43:09.930 "adrfam": "ipv4", 00:43:09.930 "trsvcid": "4420", 00:43:09.930 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:09.930 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:43:09.930 "prchk_reftag": false, 00:43:09.930 "prchk_guard": false, 00:43:09.930 "hdgst": false, 00:43:09.930 "ddgst": false, 00:43:09.930 "psk": "key0", 00:43:09.930 "allow_unrecognized_csi": false, 00:43:09.930 "method": "bdev_nvme_attach_controller", 00:43:09.930 "req_id": 1 00:43:09.930 } 00:43:09.930 Got JSON-RPC error response 00:43:09.930 response: 00:43:09.930 { 00:43:09.930 "code": -19, 00:43:09.930 "message": "No such device" 00:43:09.930 } 00:43:09.930 20:44:21 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:43:09.930 20:44:21 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:09.930 20:44:21 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:09.930 20:44:21 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:09.930 20:44:21 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:43:09.930 20:44:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:10.188 20:44:22 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:43:10.188 20:44:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:43:10.188 20:44:22 keyring_file -- keyring/common.sh@17 -- # name=key0 00:43:10.188 20:44:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:10.188 20:44:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:10.188 20:44:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:10.188 20:44:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.UJMbq8Q3tI 00:43:10.188 20:44:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:10.188 20:44:22 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:10.188 20:44:22 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:43:10.188 20:44:22 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:10.188 20:44:22 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:43:10.188 20:44:22 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:43:10.188 20:44:22 keyring_file -- nvmf/common.sh@733 -- # python - 00:43:10.188 20:44:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.UJMbq8Q3tI 00:43:10.188 20:44:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.UJMbq8Q3tI 00:43:10.188 20:44:22 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.UJMbq8Q3tI 00:43:10.188 20:44:22 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UJMbq8Q3tI 00:43:10.188 20:44:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UJMbq8Q3tI 00:43:10.446 20:44:22 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:10.446 20:44:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:10.704 nvme0n1 00:43:10.704 20:44:22 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:43:10.704 20:44:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:10.704 20:44:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:10.704 20:44:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:10.704 20:44:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:10.704 
20:44:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:10.963 20:44:22 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:43:10.963 20:44:22 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:43:10.963 20:44:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:11.222 20:44:23 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:43:11.222 20:44:23 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:43:11.222 20:44:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:11.222 20:44:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:11.222 20:44:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:11.788 20:44:23 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:43:11.788 20:44:23 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:43:11.788 20:44:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:11.788 20:44:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:11.788 20:44:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:11.788 20:44:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:11.788 20:44:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:11.788 20:44:23 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:43:11.788 20:44:23 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:11.788 20:44:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller 
nvme0 00:43:12.047 20:44:24 keyring_file -- keyring/file.sh@105 -- # jq length 00:43:12.047 20:44:24 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:43:12.047 20:44:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:12.615 20:44:24 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:43:12.615 20:44:24 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UJMbq8Q3tI 00:43:12.615 20:44:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UJMbq8Q3tI 00:43:12.615 20:44:24 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.F0VGMg4EUE 00:43:12.615 20:44:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.F0VGMg4EUE 00:43:12.874 20:44:24 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:12.874 20:44:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:13.440 nvme0n1 00:43:13.440 20:44:25 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:43:13.440 20:44:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:43:13.699 20:44:25 keyring_file -- keyring/file.sh@113 -- # config='{ 00:43:13.699 "subsystems": [ 00:43:13.699 { 00:43:13.699 "subsystem": "keyring", 00:43:13.699 
"config": [ 00:43:13.699 { 00:43:13.699 "method": "keyring_file_add_key", 00:43:13.699 "params": { 00:43:13.699 "name": "key0", 00:43:13.699 "path": "/tmp/tmp.UJMbq8Q3tI" 00:43:13.699 } 00:43:13.699 }, 00:43:13.699 { 00:43:13.699 "method": "keyring_file_add_key", 00:43:13.699 "params": { 00:43:13.699 "name": "key1", 00:43:13.699 "path": "/tmp/tmp.F0VGMg4EUE" 00:43:13.699 } 00:43:13.699 } 00:43:13.699 ] 00:43:13.699 }, 00:43:13.699 { 00:43:13.699 "subsystem": "iobuf", 00:43:13.699 "config": [ 00:43:13.699 { 00:43:13.699 "method": "iobuf_set_options", 00:43:13.699 "params": { 00:43:13.699 "small_pool_count": 8192, 00:43:13.699 "large_pool_count": 1024, 00:43:13.699 "small_bufsize": 8192, 00:43:13.699 "large_bufsize": 135168, 00:43:13.699 "enable_numa": false 00:43:13.699 } 00:43:13.699 } 00:43:13.699 ] 00:43:13.699 }, 00:43:13.699 { 00:43:13.699 "subsystem": "sock", 00:43:13.699 "config": [ 00:43:13.699 { 00:43:13.699 "method": "sock_set_default_impl", 00:43:13.699 "params": { 00:43:13.699 "impl_name": "posix" 00:43:13.699 } 00:43:13.699 }, 00:43:13.699 { 00:43:13.700 "method": "sock_impl_set_options", 00:43:13.700 "params": { 00:43:13.700 "impl_name": "ssl", 00:43:13.700 "recv_buf_size": 4096, 00:43:13.700 "send_buf_size": 4096, 00:43:13.700 "enable_recv_pipe": true, 00:43:13.700 "enable_quickack": false, 00:43:13.700 "enable_placement_id": 0, 00:43:13.700 "enable_zerocopy_send_server": true, 00:43:13.700 "enable_zerocopy_send_client": false, 00:43:13.700 "zerocopy_threshold": 0, 00:43:13.700 "tls_version": 0, 00:43:13.700 "enable_ktls": false 00:43:13.700 } 00:43:13.700 }, 00:43:13.700 { 00:43:13.700 "method": "sock_impl_set_options", 00:43:13.700 "params": { 00:43:13.700 "impl_name": "posix", 00:43:13.700 "recv_buf_size": 2097152, 00:43:13.700 "send_buf_size": 2097152, 00:43:13.700 "enable_recv_pipe": true, 00:43:13.700 "enable_quickack": false, 00:43:13.700 "enable_placement_id": 0, 00:43:13.700 "enable_zerocopy_send_server": true, 00:43:13.700 
"enable_zerocopy_send_client": false, 00:43:13.700 "zerocopy_threshold": 0, 00:43:13.700 "tls_version": 0, 00:43:13.700 "enable_ktls": false 00:43:13.700 } 00:43:13.700 } 00:43:13.700 ] 00:43:13.700 }, 00:43:13.700 { 00:43:13.700 "subsystem": "vmd", 00:43:13.700 "config": [] 00:43:13.700 }, 00:43:13.700 { 00:43:13.700 "subsystem": "accel", 00:43:13.700 "config": [ 00:43:13.700 { 00:43:13.700 "method": "accel_set_options", 00:43:13.700 "params": { 00:43:13.700 "small_cache_size": 128, 00:43:13.700 "large_cache_size": 16, 00:43:13.700 "task_count": 2048, 00:43:13.700 "sequence_count": 2048, 00:43:13.700 "buf_count": 2048 00:43:13.700 } 00:43:13.700 } 00:43:13.700 ] 00:43:13.700 }, 00:43:13.700 { 00:43:13.700 "subsystem": "bdev", 00:43:13.700 "config": [ 00:43:13.700 { 00:43:13.700 "method": "bdev_set_options", 00:43:13.700 "params": { 00:43:13.700 "bdev_io_pool_size": 65535, 00:43:13.700 "bdev_io_cache_size": 256, 00:43:13.700 "bdev_auto_examine": true, 00:43:13.700 "iobuf_small_cache_size": 128, 00:43:13.700 "iobuf_large_cache_size": 16 00:43:13.700 } 00:43:13.700 }, 00:43:13.700 { 00:43:13.700 "method": "bdev_raid_set_options", 00:43:13.700 "params": { 00:43:13.700 "process_window_size_kb": 1024, 00:43:13.700 "process_max_bandwidth_mb_sec": 0 00:43:13.700 } 00:43:13.700 }, 00:43:13.700 { 00:43:13.700 "method": "bdev_iscsi_set_options", 00:43:13.700 "params": { 00:43:13.700 "timeout_sec": 30 00:43:13.700 } 00:43:13.700 }, 00:43:13.700 { 00:43:13.700 "method": "bdev_nvme_set_options", 00:43:13.700 "params": { 00:43:13.700 "action_on_timeout": "none", 00:43:13.700 "timeout_us": 0, 00:43:13.700 "timeout_admin_us": 0, 00:43:13.700 "keep_alive_timeout_ms": 10000, 00:43:13.700 "arbitration_burst": 0, 00:43:13.700 "low_priority_weight": 0, 00:43:13.700 "medium_priority_weight": 0, 00:43:13.700 "high_priority_weight": 0, 00:43:13.700 "nvme_adminq_poll_period_us": 10000, 00:43:13.700 "nvme_ioq_poll_period_us": 0, 00:43:13.700 "io_queue_requests": 512, 00:43:13.700 
"delay_cmd_submit": true, 00:43:13.700 "transport_retry_count": 4, 00:43:13.700 "bdev_retry_count": 3, 00:43:13.700 "transport_ack_timeout": 0, 00:43:13.700 "ctrlr_loss_timeout_sec": 0, 00:43:13.700 "reconnect_delay_sec": 0, 00:43:13.700 "fast_io_fail_timeout_sec": 0, 00:43:13.700 "disable_auto_failback": false, 00:43:13.700 "generate_uuids": false, 00:43:13.700 "transport_tos": 0, 00:43:13.700 "nvme_error_stat": false, 00:43:13.700 "rdma_srq_size": 0, 00:43:13.700 "io_path_stat": false, 00:43:13.700 "allow_accel_sequence": false, 00:43:13.700 "rdma_max_cq_size": 0, 00:43:13.700 "rdma_cm_event_timeout_ms": 0, 00:43:13.700 "dhchap_digests": [ 00:43:13.700 "sha256", 00:43:13.700 "sha384", 00:43:13.700 "sha512" 00:43:13.700 ], 00:43:13.700 "dhchap_dhgroups": [ 00:43:13.700 "null", 00:43:13.700 "ffdhe2048", 00:43:13.700 "ffdhe3072", 00:43:13.700 "ffdhe4096", 00:43:13.700 "ffdhe6144", 00:43:13.700 "ffdhe8192" 00:43:13.700 ] 00:43:13.700 } 00:43:13.700 }, 00:43:13.700 { 00:43:13.700 "method": "bdev_nvme_attach_controller", 00:43:13.700 "params": { 00:43:13.700 "name": "nvme0", 00:43:13.700 "trtype": "TCP", 00:43:13.700 "adrfam": "IPv4", 00:43:13.700 "traddr": "127.0.0.1", 00:43:13.700 "trsvcid": "4420", 00:43:13.700 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:13.700 "prchk_reftag": false, 00:43:13.700 "prchk_guard": false, 00:43:13.700 "ctrlr_loss_timeout_sec": 0, 00:43:13.700 "reconnect_delay_sec": 0, 00:43:13.700 "fast_io_fail_timeout_sec": 0, 00:43:13.700 "psk": "key0", 00:43:13.700 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:13.700 "hdgst": false, 00:43:13.700 "ddgst": false, 00:43:13.700 "multipath": "multipath" 00:43:13.700 } 00:43:13.700 }, 00:43:13.700 { 00:43:13.700 "method": "bdev_nvme_set_hotplug", 00:43:13.700 "params": { 00:43:13.700 "period_us": 100000, 00:43:13.700 "enable": false 00:43:13.700 } 00:43:13.700 }, 00:43:13.700 { 00:43:13.700 "method": "bdev_wait_for_examine" 00:43:13.700 } 00:43:13.700 ] 00:43:13.700 }, 00:43:13.700 { 00:43:13.700 
"subsystem": "nbd", 00:43:13.700 "config": [] 00:43:13.700 } 00:43:13.700 ] 00:43:13.700 }' 00:43:13.700 20:44:25 keyring_file -- keyring/file.sh@115 -- # killprocess 482473 00:43:13.700 20:44:25 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 482473 ']' 00:43:13.700 20:44:25 keyring_file -- common/autotest_common.sh@958 -- # kill -0 482473 00:43:13.700 20:44:25 keyring_file -- common/autotest_common.sh@959 -- # uname 00:43:13.700 20:44:25 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:13.700 20:44:25 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 482473 00:43:13.700 20:44:25 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:43:13.700 20:44:25 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:43:13.701 20:44:25 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 482473' 00:43:13.701 killing process with pid 482473 00:43:13.701 20:44:25 keyring_file -- common/autotest_common.sh@973 -- # kill 482473 00:43:13.701 Received shutdown signal, test time was about 1.000000 seconds 00:43:13.701 00:43:13.701 Latency(us) 00:43:13.701 [2024-11-18T19:44:25.709Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:13.701 [2024-11-18T19:44:25.709Z] =================================================================================================================== 00:43:13.701 [2024-11-18T19:44:25.709Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:13.701 20:44:25 keyring_file -- common/autotest_common.sh@978 -- # wait 482473 00:43:13.976 20:44:25 keyring_file -- keyring/file.sh@118 -- # bperfpid=483935 00:43:13.976 20:44:25 keyring_file -- keyring/file.sh@120 -- # waitforlisten 483935 /var/tmp/bperf.sock 00:43:13.976 20:44:25 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 483935 ']' 00:43:13.976 20:44:25 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 
00:43:13.977 20:44:25 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:43:13.977 20:44:25 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:13.977 20:44:25 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:13.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:13.977 20:44:25 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:43:13.977 "subsystems": [ 00:43:13.977 { 00:43:13.977 "subsystem": "keyring", 00:43:13.977 "config": [ 00:43:13.977 { 00:43:13.977 "method": "keyring_file_add_key", 00:43:13.977 "params": { 00:43:13.977 "name": "key0", 00:43:13.977 "path": "/tmp/tmp.UJMbq8Q3tI" 00:43:13.977 } 00:43:13.977 }, 00:43:13.977 { 00:43:13.977 "method": "keyring_file_add_key", 00:43:13.977 "params": { 00:43:13.977 "name": "key1", 00:43:13.977 "path": "/tmp/tmp.F0VGMg4EUE" 00:43:13.977 } 00:43:13.977 } 00:43:13.977 ] 00:43:13.977 }, 00:43:13.977 { 00:43:13.977 "subsystem": "iobuf", 00:43:13.977 "config": [ 00:43:13.977 { 00:43:13.977 "method": "iobuf_set_options", 00:43:13.977 "params": { 00:43:13.977 "small_pool_count": 8192, 00:43:13.977 "large_pool_count": 1024, 00:43:13.977 "small_bufsize": 8192, 00:43:13.977 "large_bufsize": 135168, 00:43:13.977 "enable_numa": false 00:43:13.977 } 00:43:13.977 } 00:43:13.977 ] 00:43:13.977 }, 00:43:13.977 { 00:43:13.977 "subsystem": "sock", 00:43:13.977 "config": [ 00:43:13.977 { 00:43:13.977 "method": "sock_set_default_impl", 00:43:13.977 "params": { 00:43:13.977 "impl_name": "posix" 00:43:13.977 } 00:43:13.977 }, 00:43:13.977 { 00:43:13.977 "method": "sock_impl_set_options", 00:43:13.977 "params": { 00:43:13.977 "impl_name": "ssl", 00:43:13.977 "recv_buf_size": 4096, 00:43:13.977 "send_buf_size": 4096, 
00:43:13.977 "enable_recv_pipe": true, 00:43:13.977 "enable_quickack": false, 00:43:13.977 "enable_placement_id": 0, 00:43:13.977 "enable_zerocopy_send_server": true, 00:43:13.977 "enable_zerocopy_send_client": false, 00:43:13.977 "zerocopy_threshold": 0, 00:43:13.977 "tls_version": 0, 00:43:13.977 "enable_ktls": false 00:43:13.977 } 00:43:13.977 }, 00:43:13.977 { 00:43:13.977 "method": "sock_impl_set_options", 00:43:13.977 "params": { 00:43:13.977 "impl_name": "posix", 00:43:13.977 "recv_buf_size": 2097152, 00:43:13.977 "send_buf_size": 2097152, 00:43:13.977 "enable_recv_pipe": true, 00:43:13.977 "enable_quickack": false, 00:43:13.977 "enable_placement_id": 0, 00:43:13.977 "enable_zerocopy_send_server": true, 00:43:13.977 "enable_zerocopy_send_client": false, 00:43:13.977 "zerocopy_threshold": 0, 00:43:13.977 "tls_version": 0, 00:43:13.977 "enable_ktls": false 00:43:13.977 } 00:43:13.977 } 00:43:13.977 ] 00:43:13.977 }, 00:43:13.977 { 00:43:13.977 "subsystem": "vmd", 00:43:13.977 "config": [] 00:43:13.977 }, 00:43:13.977 { 00:43:13.977 "subsystem": "accel", 00:43:13.977 "config": [ 00:43:13.977 { 00:43:13.977 "method": "accel_set_options", 00:43:13.977 "params": { 00:43:13.977 "small_cache_size": 128, 00:43:13.977 "large_cache_size": 16, 00:43:13.977 "task_count": 2048, 00:43:13.977 "sequence_count": 2048, 00:43:13.977 "buf_count": 2048 00:43:13.977 } 00:43:13.977 } 00:43:13.977 ] 00:43:13.977 }, 00:43:13.977 { 00:43:13.977 "subsystem": "bdev", 00:43:13.977 "config": [ 00:43:13.977 { 00:43:13.977 "method": "bdev_set_options", 00:43:13.977 "params": { 00:43:13.977 "bdev_io_pool_size": 65535, 00:43:13.977 "bdev_io_cache_size": 256, 00:43:13.977 "bdev_auto_examine": true, 00:43:13.977 "iobuf_small_cache_size": 128, 00:43:13.977 "iobuf_large_cache_size": 16 00:43:13.977 } 00:43:13.977 }, 00:43:13.977 { 00:43:13.977 "method": "bdev_raid_set_options", 00:43:13.977 "params": { 00:43:13.977 "process_window_size_kb": 1024, 00:43:13.977 "process_max_bandwidth_mb_sec": 0 
00:43:13.977 } 00:43:13.977 }, 00:43:13.977 { 00:43:13.977 "method": "bdev_iscsi_set_options", 00:43:13.977 "params": { 00:43:13.977 "timeout_sec": 30 00:43:13.977 } 00:43:13.977 }, 00:43:13.977 { 00:43:13.977 "method": "bdev_nvme_set_options", 00:43:13.977 "params": { 00:43:13.977 "action_on_timeout": "none", 00:43:13.977 "timeout_us": 0, 00:43:13.977 "timeout_admin_us": 0, 00:43:13.977 "keep_alive_timeout_ms": 10000, 00:43:13.977 "arbitration_burst": 0, 00:43:13.977 "low_priority_weight": 0, 00:43:13.977 "medium_priority_weight": 0, 00:43:13.977 "high_priority_weight": 0, 00:43:13.977 "nvme_adminq_poll_period_us": 10000, 00:43:13.977 "nvme_ioq_poll_period_us": 0, 00:43:13.977 "io_queue_requests": 512, 00:43:13.977 "delay_cmd_submit": true, 00:43:13.977 "transport_retry_count": 4, 00:43:13.977 "bdev_retry_count": 3, 00:43:13.977 "transport_ack_timeout": 0, 00:43:13.977 "ctrlr_loss_timeout_sec": 0, 00:43:13.977 "reconnect_delay_sec": 0, 00:43:13.977 "fast_io_fail_timeout_sec": 0, 00:43:13.977 "disable_auto_failback": false, 00:43:13.977 "generate_uuids": false, 00:43:13.977 "transport_tos": 0, 00:43:13.977 "nvme_error_stat": false, 00:43:13.977 "rdma_srq_size": 0, 00:43:13.977 "io_path_stat": false, 00:43:13.977 "allow_accel_sequence": false, 00:43:13.977 "rdma_max_cq_size": 0, 00:43:13.977 "rdma_cm_event_timeout_ms": 0, 00:43:13.977 "dhchap_digests": [ 00:43:13.977 "sha256", 00:43:13.977 "sha384", 00:43:13.977 "sha512" 00:43:13.977 ], 00:43:13.978 "dhchap_dhgroups": [ 00:43:13.978 "null", 00:43:13.978 "ffdhe2048", 00:43:13.978 "ffdhe3072", 00:43:13.978 "ffdhe4096", 00:43:13.978 "ffdhe6144", 00:43:13.978 "ffdhe8192" 00:43:13.978 ] 00:43:13.978 } 00:43:13.978 }, 00:43:13.978 { 00:43:13.978 "method": "bdev_nvme_attach_controller", 00:43:13.978 "params": { 00:43:13.978 "name": "nvme0", 00:43:13.978 "trtype": "TCP", 00:43:13.978 "adrfam": "IPv4", 00:43:13.978 "traddr": "127.0.0.1", 00:43:13.978 "trsvcid": "4420", 00:43:13.978 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:43:13.978 "prchk_reftag": false, 00:43:13.978 "prchk_guard": false, 00:43:13.978 "ctrlr_loss_timeout_sec": 0, 00:43:13.978 "reconnect_delay_sec": 0, 00:43:13.978 "fast_io_fail_timeout_sec": 0, 00:43:13.978 "psk": "key0", 00:43:13.978 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:13.978 "hdgst": false, 00:43:13.978 "ddgst": false, 00:43:13.978 "multipath": "multipath" 00:43:13.978 } 00:43:13.978 }, 00:43:13.978 { 00:43:13.978 "method": "bdev_nvme_set_hotplug", 00:43:13.978 "params": { 00:43:13.978 "period_us": 100000, 00:43:13.978 "enable": false 00:43:13.978 } 00:43:13.978 }, 00:43:13.978 { 00:43:13.978 "method": "bdev_wait_for_examine" 00:43:13.978 } 00:43:13.978 ] 00:43:13.978 }, 00:43:13.978 { 00:43:13.978 "subsystem": "nbd", 00:43:13.978 "config": [] 00:43:13.978 } 00:43:13.978 ] 00:43:13.978 }' 00:43:13.978 20:44:25 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:13.978 20:44:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:13.978 [2024-11-18 20:44:25.772067] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:43:13.978 [2024-11-18 20:44:25.772137] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483935 ] 00:43:13.978 [2024-11-18 20:44:25.838344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:13.978 [2024-11-18 20:44:25.891093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:14.236 [2024-11-18 20:44:26.068135] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:14.236 20:44:26 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:14.236 20:44:26 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:43:14.236 20:44:26 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:43:14.236 20:44:26 keyring_file -- keyring/file.sh@121 -- # jq length 00:43:14.236 20:44:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:14.495 20:44:26 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:43:14.495 20:44:26 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:43:14.495 20:44:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:14.495 20:44:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:14.495 20:44:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:14.495 20:44:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:14.495 20:44:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:14.754 20:44:26 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:43:14.754 20:44:26 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:43:14.754 20:44:26 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:14.754 20:44:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:14.754 20:44:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:14.754 20:44:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:14.754 20:44:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:15.013 20:44:26 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:43:15.013 20:44:26 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:43:15.013 20:44:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:43:15.013 20:44:26 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:43:15.579 20:44:27 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:43:15.579 20:44:27 keyring_file -- keyring/file.sh@1 -- # cleanup 00:43:15.579 20:44:27 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.UJMbq8Q3tI /tmp/tmp.F0VGMg4EUE 00:43:15.579 20:44:27 keyring_file -- keyring/file.sh@20 -- # killprocess 483935 00:43:15.579 20:44:27 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 483935 ']' 00:43:15.579 20:44:27 keyring_file -- common/autotest_common.sh@958 -- # kill -0 483935 00:43:15.579 20:44:27 keyring_file -- common/autotest_common.sh@959 -- # uname 00:43:15.579 20:44:27 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:15.579 20:44:27 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 483935 00:43:15.579 20:44:27 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:43:15.579 20:44:27 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:43:15.579 20:44:27 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 483935' 00:43:15.579 killing process with pid 483935 00:43:15.579 20:44:27 keyring_file -- common/autotest_common.sh@973 -- # kill 483935 00:43:15.579 Received shutdown signal, test time was about 1.000000 seconds 00:43:15.579 00:43:15.579 Latency(us) 00:43:15.579 [2024-11-18T19:44:27.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:15.579 [2024-11-18T19:44:27.587Z] =================================================================================================================== 00:43:15.579 [2024-11-18T19:44:27.587Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:43:15.579 20:44:27 keyring_file -- common/autotest_common.sh@978 -- # wait 483935 00:43:15.579 20:44:27 keyring_file -- keyring/file.sh@21 -- # killprocess 482456 00:43:15.579 20:44:27 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 482456 ']' 00:43:15.579 20:44:27 keyring_file -- common/autotest_common.sh@958 -- # kill -0 482456 00:43:15.579 20:44:27 keyring_file -- common/autotest_common.sh@959 -- # uname 00:43:15.579 20:44:27 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:15.579 20:44:27 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 482456 00:43:15.579 20:44:27 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:15.579 20:44:27 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:15.579 20:44:27 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 482456' 00:43:15.579 killing process with pid 482456 00:43:15.579 20:44:27 keyring_file -- common/autotest_common.sh@973 -- # kill 482456 00:43:15.579 20:44:27 keyring_file -- common/autotest_common.sh@978 -- # wait 482456 00:43:16.147 00:43:16.147 real 0m14.414s 00:43:16.147 user 0m36.903s 00:43:16.147 sys 0m3.224s 00:43:16.147 20:44:27 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:16.147 20:44:27 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:16.147 ************************************ 00:43:16.147 END TEST keyring_file 00:43:16.147 ************************************ 00:43:16.147 20:44:27 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:43:16.147 20:44:27 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:16.147 20:44:27 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:16.147 20:44:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:16.147 20:44:27 -- common/autotest_common.sh@10 -- # set +x 00:43:16.147 ************************************ 00:43:16.147 START TEST keyring_linux 00:43:16.147 ************************************ 00:43:16.147 20:44:27 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:16.147 Joined session keyring: 421077713 00:43:16.147 * Looking for test storage... 
00:43:16.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:16.147 20:44:27 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:43:16.147 20:44:27 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:43:16.147 20:44:27 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:43:16.147 20:44:28 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:43:16.147 20:44:28 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:16.147 20:44:28 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:16.147 20:44:28 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:16.147 20:44:28 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:43:16.147 20:44:28 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:43:16.147 20:44:28 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:43:16.147 20:44:28 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:43:16.147 20:44:28 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:43:16.147 20:44:28 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:43:16.147 20:44:28 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:43:16.147 20:44:28 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:16.147 20:44:28 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:43:16.147 20:44:28 keyring_linux -- scripts/common.sh@345 -- # : 1 00:43:16.147 20:44:28 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:16.147 20:44:28 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:16.147 20:44:28 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:43:16.147 20:44:28 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:43:16.147 20:44:28 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:16.147 20:44:28 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:43:16.147 20:44:28 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:43:16.147 20:44:28 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:43:16.147 20:44:28 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:43:16.147 20:44:28 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:16.147 20:44:28 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:43:16.147 20:44:28 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:43:16.147 20:44:28 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:16.147 20:44:28 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:16.147 20:44:28 keyring_linux -- scripts/common.sh@368 -- # return 0 00:43:16.147 20:44:28 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:16.147 20:44:28 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:43:16.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:16.147 --rc genhtml_branch_coverage=1 00:43:16.147 --rc genhtml_function_coverage=1 00:43:16.147 --rc genhtml_legend=1 00:43:16.147 --rc geninfo_all_blocks=1 00:43:16.147 --rc geninfo_unexecuted_blocks=1 00:43:16.147 00:43:16.147 ' 00:43:16.147 20:44:28 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:43:16.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:16.147 --rc genhtml_branch_coverage=1 00:43:16.147 --rc genhtml_function_coverage=1 00:43:16.147 --rc genhtml_legend=1 00:43:16.147 --rc geninfo_all_blocks=1 00:43:16.147 --rc geninfo_unexecuted_blocks=1 00:43:16.147 00:43:16.147 ' 
00:43:16.147 20:44:28 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:43:16.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:16.147 --rc genhtml_branch_coverage=1 00:43:16.147 --rc genhtml_function_coverage=1 00:43:16.147 --rc genhtml_legend=1 00:43:16.147 --rc geninfo_all_blocks=1 00:43:16.147 --rc geninfo_unexecuted_blocks=1 00:43:16.147 00:43:16.147 ' 00:43:16.147 20:44:28 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:43:16.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:16.147 --rc genhtml_branch_coverage=1 00:43:16.147 --rc genhtml_function_coverage=1 00:43:16.147 --rc genhtml_legend=1 00:43:16.147 --rc geninfo_all_blocks=1 00:43:16.147 --rc geninfo_unexecuted_blocks=1 00:43:16.147 00:43:16.147 ' 00:43:16.147 20:44:28 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:16.148 20:44:28 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:16.148 20:44:28 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:43:16.148 20:44:28 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:16.148 20:44:28 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:16.148 20:44:28 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:16.148 20:44:28 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:16.148 20:44:28 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:16.148 20:44:28 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:16.148 20:44:28 keyring_linux -- paths/export.sh@5 -- # export PATH 00:43:16.148 20:44:28 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:43:16.148 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:16.148 20:44:28 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:16.148 20:44:28 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:16.148 20:44:28 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:16.148 20:44:28 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:43:16.148 20:44:28 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:43:16.148 20:44:28 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:43:16.148 20:44:28 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:43:16.148 20:44:28 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:16.148 20:44:28 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:43:16.148 20:44:28 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:16.148 20:44:28 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:16.148 20:44:28 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:43:16.148 20:44:28 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@733 -- # python - 00:43:16.148 20:44:28 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:43:16.148 20:44:28 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:43:16.148 /tmp/:spdk-test:key0 00:43:16.148 20:44:28 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:43:16.148 20:44:28 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:16.148 20:44:28 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:43:16.148 20:44:28 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:43:16.148 20:44:28 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:16.148 20:44:28 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:43:16.148 20:44:28 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:43:16.148 20:44:28 keyring_linux -- nvmf/common.sh@733 -- # python - 00:43:16.148 20:44:28 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:43:16.148 20:44:28 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:43:16.148 /tmp/:spdk-test:key1 00:43:16.148 20:44:28 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=484342 00:43:16.148 20:44:28 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:43:16.148 20:44:28 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 484342 00:43:16.148 20:44:28 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 484342 ']' 00:43:16.148 20:44:28 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:16.148 20:44:28 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:16.148 20:44:28 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:16.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:16.148 20:44:28 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:16.148 20:44:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:16.409 [2024-11-18 20:44:28.198132] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:43:16.409 [2024-11-18 20:44:28.198234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484342 ] 00:43:16.409 [2024-11-18 20:44:28.266433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:16.409 [2024-11-18 20:44:28.312464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:16.667 20:44:28 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:16.667 20:44:28 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:43:16.667 20:44:28 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:43:16.667 20:44:28 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.667 20:44:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:16.667 [2024-11-18 20:44:28.564754] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:16.667 null0 00:43:16.667 [2024-11-18 20:44:28.596797] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:43:16.667 [2024-11-18 20:44:28.597280] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:43:16.667 20:44:28 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.667 20:44:28 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:43:16.667 762117530 00:43:16.667 20:44:28 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:43:16.667 983053117 00:43:16.668 20:44:28 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=484424 00:43:16.668 20:44:28 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w 
randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:43:16.668 20:44:28 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 484424 /var/tmp/bperf.sock 00:43:16.668 20:44:28 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 484424 ']' 00:43:16.668 20:44:28 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:16.668 20:44:28 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:16.668 20:44:28 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:16.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:16.668 20:44:28 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:16.668 20:44:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:16.668 [2024-11-18 20:44:28.662980] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:43:16.668 [2024-11-18 20:44:28.663044] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484424 ] 00:43:16.926 [2024-11-18 20:44:28.727549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:16.926 [2024-11-18 20:44:28.772356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:16.926 20:44:28 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:16.926 20:44:28 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:43:16.926 20:44:28 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:43:16.926 20:44:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:43:17.184 20:44:29 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:43:17.184 20:44:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:43:17.749 20:44:29 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:17.749 20:44:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:18.007 [2024-11-18 20:44:29.766074] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:18.007 nvme0n1 00:43:18.007 20:44:29 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:43:18.007 20:44:29 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:43:18.007 20:44:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:18.007 20:44:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:18.007 20:44:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:18.007 20:44:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:18.265 20:44:30 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:43:18.265 20:44:30 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:18.265 20:44:30 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:43:18.265 20:44:30 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:43:18.265 20:44:30 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:18.265 20:44:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:18.265 20:44:30 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:43:18.523 20:44:30 keyring_linux -- keyring/linux.sh@25 -- # sn=762117530 00:43:18.523 20:44:30 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:43:18.523 20:44:30 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:18.523 20:44:30 keyring_linux -- keyring/linux.sh@26 -- # [[ 762117530 == \7\6\2\1\1\7\5\3\0 ]] 00:43:18.523 20:44:30 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 762117530 00:43:18.523 20:44:30 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:43:18.523 20:44:30 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:18.523 Running I/O for 1 seconds... 00:43:19.901 9837.00 IOPS, 38.43 MiB/s 00:43:19.901 Latency(us) 00:43:19.901 [2024-11-18T19:44:31.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:19.901 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:43:19.901 nvme0n1 : 1.01 9838.73 38.43 0.00 0.00 12921.45 10291.58 21262.79 00:43:19.901 [2024-11-18T19:44:31.909Z] =================================================================================================================== 00:43:19.901 [2024-11-18T19:44:31.909Z] Total : 9838.73 38.43 0.00 0.00 12921.45 10291.58 21262.79 00:43:19.901 { 00:43:19.901 "results": [ 00:43:19.901 { 00:43:19.901 "job": "nvme0n1", 00:43:19.901 "core_mask": "0x2", 00:43:19.901 "workload": "randread", 00:43:19.901 "status": "finished", 00:43:19.901 "queue_depth": 128, 00:43:19.901 "io_size": 4096, 00:43:19.901 "runtime": 1.012834, 00:43:19.901 "iops": 9838.729742484948, 00:43:19.901 "mibps": 38.432538056581826, 00:43:19.901 "io_failed": 0, 00:43:19.901 "io_timeout": 0, 00:43:19.901 "avg_latency_us": 12921.44707364665, 00:43:19.901 "min_latency_us": 10291.579259259259, 00:43:19.901 "max_latency_us": 21262.79111111111 00:43:19.901 } 00:43:19.901 ], 00:43:19.901 "core_count": 1 00:43:19.901 } 00:43:19.901 20:44:31 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:19.901 20:44:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:19.901 20:44:31 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:43:19.901 20:44:31 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:43:19.901 20:44:31 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:19.901 20:44:31 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:19.901 20:44:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:19.901 20:44:31 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:20.159 20:44:32 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:43:20.159 20:44:32 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:20.159 20:44:32 keyring_linux -- keyring/linux.sh@23 -- # return 00:43:20.159 20:44:32 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:20.159 20:44:32 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:43:20.159 20:44:32 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:20.159 20:44:32 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:43:20.159 20:44:32 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:20.159 20:44:32 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:43:20.159 20:44:32 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:20.159 20:44:32 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:20.159 20:44:32 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:20.417 [2024-11-18 20:44:32.370907] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:43:20.417 [2024-11-18 20:44:32.371430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x678900 (107): Transport endpoint is not connected 00:43:20.417 [2024-11-18 20:44:32.372424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x678900 (9): Bad file descriptor 00:43:20.417 [2024-11-18 20:44:32.373423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:43:20.417 [2024-11-18 20:44:32.373442] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:43:20.417 [2024-11-18 20:44:32.373464] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:43:20.417 [2024-11-18 20:44:32.373478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:43:20.417 request: 00:43:20.417 { 00:43:20.417 "name": "nvme0", 00:43:20.417 "trtype": "tcp", 00:43:20.417 "traddr": "127.0.0.1", 00:43:20.417 "adrfam": "ipv4", 00:43:20.417 "trsvcid": "4420", 00:43:20.417 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:20.417 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:20.417 "prchk_reftag": false, 00:43:20.417 "prchk_guard": false, 00:43:20.417 "hdgst": false, 00:43:20.417 "ddgst": false, 00:43:20.417 "psk": ":spdk-test:key1", 00:43:20.417 "allow_unrecognized_csi": false, 00:43:20.417 "method": "bdev_nvme_attach_controller", 00:43:20.417 "req_id": 1 00:43:20.417 } 00:43:20.417 Got JSON-RPC error response 00:43:20.417 response: 00:43:20.417 { 00:43:20.417 "code": -5, 00:43:20.417 "message": "Input/output error" 00:43:20.417 } 00:43:20.417 20:44:32 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:43:20.417 20:44:32 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:20.417 20:44:32 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:20.417 20:44:32 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:20.417 20:44:32 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:43:20.418 20:44:32 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:43:20.418 20:44:32 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:43:20.418 20:44:32 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:43:20.418 20:44:32 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:43:20.418 20:44:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:20.418 20:44:32 keyring_linux -- keyring/linux.sh@33 -- # sn=762117530 00:43:20.418 20:44:32 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 762117530 00:43:20.418 1 links removed 00:43:20.418 20:44:32 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:43:20.418 20:44:32 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:43:20.418 
20:44:32 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:43:20.418 20:44:32 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:43:20.418 20:44:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:43:20.418 20:44:32 keyring_linux -- keyring/linux.sh@33 -- # sn=983053117 00:43:20.418 20:44:32 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 983053117 00:43:20.418 1 links removed 00:43:20.418 20:44:32 keyring_linux -- keyring/linux.sh@41 -- # killprocess 484424 00:43:20.418 20:44:32 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 484424 ']' 00:43:20.418 20:44:32 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 484424 00:43:20.418 20:44:32 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:43:20.418 20:44:32 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:20.418 20:44:32 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 484424 00:43:20.676 20:44:32 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:43:20.676 20:44:32 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:43:20.676 20:44:32 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 484424' 00:43:20.676 killing process with pid 484424 00:43:20.676 20:44:32 keyring_linux -- common/autotest_common.sh@973 -- # kill 484424 00:43:20.676 Received shutdown signal, test time was about 1.000000 seconds 00:43:20.676 00:43:20.676 Latency(us) 00:43:20.676 [2024-11-18T19:44:32.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:20.676 [2024-11-18T19:44:32.684Z] =================================================================================================================== 00:43:20.676 [2024-11-18T19:44:32.684Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:20.676 20:44:32 keyring_linux -- common/autotest_common.sh@978 -- # wait 484424 
00:43:20.676 20:44:32 keyring_linux -- keyring/linux.sh@42 -- # killprocess 484342 00:43:20.676 20:44:32 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 484342 ']' 00:43:20.676 20:44:32 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 484342 00:43:20.676 20:44:32 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:43:20.676 20:44:32 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:20.676 20:44:32 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 484342 00:43:20.676 20:44:32 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:20.676 20:44:32 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:20.676 20:44:32 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 484342' 00:43:20.676 killing process with pid 484342 00:43:20.676 20:44:32 keyring_linux -- common/autotest_common.sh@973 -- # kill 484342 00:43:20.676 20:44:32 keyring_linux -- common/autotest_common.sh@978 -- # wait 484342 00:43:21.243 00:43:21.243 real 0m5.064s 00:43:21.243 user 0m10.267s 00:43:21.243 sys 0m1.542s 00:43:21.243 20:44:32 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:21.243 20:44:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:21.243 ************************************ 00:43:21.243 END TEST keyring_linux 00:43:21.243 ************************************ 00:43:21.243 20:44:33 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:43:21.243 20:44:33 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:43:21.243 20:44:33 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:43:21.243 20:44:33 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:43:21.243 20:44:33 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:43:21.243 20:44:33 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:43:21.243 20:44:33 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:43:21.243 20:44:33 -- spdk/autotest.sh@346 -- # '[' 0 
-eq 1 ']' 00:43:21.243 20:44:33 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:43:21.243 20:44:33 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:43:21.243 20:44:33 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:43:21.243 20:44:33 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:43:21.243 20:44:33 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:43:21.243 20:44:33 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:43:21.243 20:44:33 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:43:21.243 20:44:33 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:43:21.243 20:44:33 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:43:21.243 20:44:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:21.243 20:44:33 -- common/autotest_common.sh@10 -- # set +x 00:43:21.243 20:44:33 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:43:21.243 20:44:33 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:43:21.243 20:44:33 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:43:21.243 20:44:33 -- common/autotest_common.sh@10 -- # set +x 00:43:23.147 INFO: APP EXITING 00:43:23.147 INFO: killing all VMs 00:43:23.147 INFO: killing vhost app 00:43:23.147 INFO: EXIT DONE 00:43:24.084 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:43:24.084 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:43:24.084 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:43:24.084 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:43:24.084 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:43:24.084 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:43:24.084 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:43:24.343 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:43:24.343 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:43:24.343 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:43:24.343 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:43:24.343 
0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:43:24.343 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:43:24.343 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:43:24.343 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:43:24.343 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:43:24.343 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:43:25.718 Cleaning 00:43:25.718 Removing: /var/run/dpdk/spdk0/config 00:43:25.718 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:43:25.718 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:43:25.718 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:43:25.718 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:43:25.718 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:43:25.718 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:43:25.718 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:43:25.718 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:43:25.719 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:43:25.719 Removing: /var/run/dpdk/spdk0/hugepage_info 00:43:25.719 Removing: /var/run/dpdk/spdk1/config 00:43:25.719 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:43:25.719 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:43:25.719 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:43:25.719 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:43:25.719 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:43:25.719 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:43:25.719 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:43:25.719 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:43:25.719 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:43:25.719 Removing: /var/run/dpdk/spdk1/hugepage_info 00:43:25.719 Removing: /var/run/dpdk/spdk2/config 00:43:25.719 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:43:25.719 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:43:25.719 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:43:25.719 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:43:25.719 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:43:25.719 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:43:25.719 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:43:25.719 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:43:25.719 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:43:25.719 Removing: /var/run/dpdk/spdk2/hugepage_info 00:43:25.719 Removing: /var/run/dpdk/spdk3/config 00:43:25.719 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:43:25.719 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:43:25.719 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:43:25.719 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:43:25.719 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:43:25.719 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:43:25.719 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:43:25.719 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:43:25.719 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:43:25.719 Removing: /var/run/dpdk/spdk3/hugepage_info 00:43:25.719 Removing: /var/run/dpdk/spdk4/config 00:43:25.719 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:43:25.719 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:43:25.719 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:43:25.719 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:43:25.719 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:43:25.719 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:43:25.719 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:43:25.719 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:43:25.719 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:43:25.719 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:43:25.719 Removing: /dev/shm/bdev_svc_trace.1 00:43:25.719 Removing: /dev/shm/nvmf_trace.0 00:43:25.719 Removing: /dev/shm/spdk_tgt_trace.pid99129 00:43:25.719 Removing: /var/run/dpdk/spdk0 00:43:25.719 Removing: /var/run/dpdk/spdk1 00:43:25.719 Removing: /var/run/dpdk/spdk2 00:43:25.719 Removing: /var/run/dpdk/spdk3 00:43:25.719 Removing: /var/run/dpdk/spdk4 00:43:25.719 Removing: /var/run/dpdk/spdk_pid100146 00:43:25.719 Removing: /var/run/dpdk/spdk_pid100283 00:43:25.719 Removing: /var/run/dpdk/spdk_pid100999 00:43:25.719 Removing: /var/run/dpdk/spdk_pid101009 00:43:25.719 Removing: /var/run/dpdk/spdk_pid101269 00:43:25.719 Removing: /var/run/dpdk/spdk_pid102587 00:43:25.719 Removing: /var/run/dpdk/spdk_pid103508 00:43:25.719 Removing: /var/run/dpdk/spdk_pid103824 00:43:25.719 Removing: /var/run/dpdk/spdk_pid104023 00:43:25.719 Removing: /var/run/dpdk/spdk_pid104234 00:43:25.719 Removing: /var/run/dpdk/spdk_pid104430 00:43:25.719 Removing: /var/run/dpdk/spdk_pid104591 00:43:25.719 Removing: /var/run/dpdk/spdk_pid104750 00:43:25.719 Removing: /var/run/dpdk/spdk_pid105053 00:43:25.719 Removing: /var/run/dpdk/spdk_pid105247 00:43:25.719 Removing: /var/run/dpdk/spdk_pid107736 00:43:25.719 Removing: /var/run/dpdk/spdk_pid107901 00:43:25.719 Removing: /var/run/dpdk/spdk_pid108061 00:43:25.719 Removing: /var/run/dpdk/spdk_pid108070 00:43:25.719 Removing: /var/run/dpdk/spdk_pid108368 00:43:25.719 Removing: /var/run/dpdk/spdk_pid108496 00:43:25.719 Removing: /var/run/dpdk/spdk_pid108797 00:43:25.719 Removing: /var/run/dpdk/spdk_pid108803 00:43:25.719 Removing: /var/run/dpdk/spdk_pid109092 00:43:25.719 Removing: /var/run/dpdk/spdk_pid109103 00:43:25.719 Removing: /var/run/dpdk/spdk_pid109271 00:43:25.719 Removing: /var/run/dpdk/spdk_pid109368 00:43:25.719 Removing: /var/run/dpdk/spdk_pid109775 00:43:25.719 Removing: /var/run/dpdk/spdk_pid109932 00:43:25.719 Removing: /var/run/dpdk/spdk_pid110132 00:43:25.719 Removing: /var/run/dpdk/spdk_pid112369 00:43:25.719 Removing: 
/var/run/dpdk/spdk_pid115009 00:43:25.977 Removing: /var/run/dpdk/spdk_pid122616 00:43:25.977 Removing: /var/run/dpdk/spdk_pid123029 00:43:25.977 Removing: /var/run/dpdk/spdk_pid125549 00:43:25.977 Removing: /var/run/dpdk/spdk_pid125831 00:43:25.977 Removing: /var/run/dpdk/spdk_pid128368 00:43:25.977 Removing: /var/run/dpdk/spdk_pid132139 00:43:25.977 Removing: /var/run/dpdk/spdk_pid134283 00:43:25.977 Removing: /var/run/dpdk/spdk_pid140700 00:43:25.977 Removing: /var/run/dpdk/spdk_pid145992 00:43:25.977 Removing: /var/run/dpdk/spdk_pid147187 00:43:25.977 Removing: /var/run/dpdk/spdk_pid147862 00:43:25.977 Removing: /var/run/dpdk/spdk_pid158848 00:43:25.977 Removing: /var/run/dpdk/spdk_pid161129 00:43:25.977 Removing: /var/run/dpdk/spdk_pid217573 00:43:25.977 Removing: /var/run/dpdk/spdk_pid220883 00:43:25.977 Removing: /var/run/dpdk/spdk_pid224712 00:43:25.977 Removing: /var/run/dpdk/spdk_pid228857 00:43:25.977 Removing: /var/run/dpdk/spdk_pid228972 00:43:25.977 Removing: /var/run/dpdk/spdk_pid229513 00:43:25.977 Removing: /var/run/dpdk/spdk_pid230166 00:43:25.977 Removing: /var/run/dpdk/spdk_pid230815 00:43:25.977 Removing: /var/run/dpdk/spdk_pid231220 00:43:25.977 Removing: /var/run/dpdk/spdk_pid231227 00:43:25.977 Removing: /var/run/dpdk/spdk_pid231369 00:43:25.977 Removing: /var/run/dpdk/spdk_pid231508 00:43:25.977 Removing: /var/run/dpdk/spdk_pid231519 00:43:25.977 Removing: /var/run/dpdk/spdk_pid232163 00:43:25.977 Removing: /var/run/dpdk/spdk_pid232827 00:43:25.977 Removing: /var/run/dpdk/spdk_pid233487 00:43:25.977 Removing: /var/run/dpdk/spdk_pid233897 00:43:25.977 Removing: /var/run/dpdk/spdk_pid233996 00:43:25.977 Removing: /var/run/dpdk/spdk_pid234163 00:43:25.977 Removing: /var/run/dpdk/spdk_pid235057 00:43:25.977 Removing: /var/run/dpdk/spdk_pid235799 00:43:25.977 Removing: /var/run/dpdk/spdk_pid241223 00:43:25.977 Removing: /var/run/dpdk/spdk_pid270068 00:43:25.977 Removing: /var/run/dpdk/spdk_pid272964 00:43:25.977 Removing: 
/var/run/dpdk/spdk_pid274144 00:43:25.977 Removing: /var/run/dpdk/spdk_pid275476 00:43:25.977 Removing: /var/run/dpdk/spdk_pid275622 00:43:25.977 Removing: /var/run/dpdk/spdk_pid275766 00:43:25.977 Removing: /var/run/dpdk/spdk_pid275904 00:43:25.977 Removing: /var/run/dpdk/spdk_pid276357 00:43:25.977 Removing: /var/run/dpdk/spdk_pid277673 00:43:25.977 Removing: /var/run/dpdk/spdk_pid278467 00:43:25.977 Removing: /var/run/dpdk/spdk_pid278835 00:43:25.977 Removing: /var/run/dpdk/spdk_pid280459 00:43:25.977 Removing: /var/run/dpdk/spdk_pid280876 00:43:25.977 Removing: /var/run/dpdk/spdk_pid281323 00:43:25.977 Removing: /var/run/dpdk/spdk_pid283709 00:43:25.977 Removing: /var/run/dpdk/spdk_pid287080 00:43:25.977 Removing: /var/run/dpdk/spdk_pid287083 00:43:25.977 Removing: /var/run/dpdk/spdk_pid287085 00:43:25.977 Removing: /var/run/dpdk/spdk_pid289214 00:43:25.977 Removing: /var/run/dpdk/spdk_pid291421 00:43:25.977 Removing: /var/run/dpdk/spdk_pid295212 00:43:25.977 Removing: /var/run/dpdk/spdk_pid318341 00:43:25.977 Removing: /var/run/dpdk/spdk_pid321053 00:43:25.977 Removing: /var/run/dpdk/spdk_pid325583 00:43:25.977 Removing: /var/run/dpdk/spdk_pid326526 00:43:25.977 Removing: /var/run/dpdk/spdk_pid327487 00:43:25.977 Removing: /var/run/dpdk/spdk_pid328575 00:43:25.977 Removing: /var/run/dpdk/spdk_pid331333 00:43:25.977 Removing: /var/run/dpdk/spdk_pid333917 00:43:25.977 Removing: /var/run/dpdk/spdk_pid336156 00:43:25.977 Removing: /var/run/dpdk/spdk_pid340386 00:43:25.977 Removing: /var/run/dpdk/spdk_pid340512 00:43:25.977 Removing: /var/run/dpdk/spdk_pid343300 00:43:25.977 Removing: /var/run/dpdk/spdk_pid343441 00:43:25.977 Removing: /var/run/dpdk/spdk_pid343586 00:43:25.977 Removing: /var/run/dpdk/spdk_pid343956 00:43:25.977 Removing: /var/run/dpdk/spdk_pid343968 00:43:25.977 Removing: /var/run/dpdk/spdk_pid345042 00:43:25.977 Removing: /var/run/dpdk/spdk_pid346216 00:43:25.977 Removing: /var/run/dpdk/spdk_pid347393 00:43:25.977 Removing: 
/var/run/dpdk/spdk_pid348568 00:43:25.977 Removing: /var/run/dpdk/spdk_pid349834 00:43:25.977 Removing: /var/run/dpdk/spdk_pid351043 00:43:25.977 Removing: /var/run/dpdk/spdk_pid354858 00:43:25.977 Removing: /var/run/dpdk/spdk_pid355200 00:43:25.977 Removing: /var/run/dpdk/spdk_pid357210 00:43:25.977 Removing: /var/run/dpdk/spdk_pid357947 00:43:25.977 Removing: /var/run/dpdk/spdk_pid361667 00:43:25.977 Removing: /var/run/dpdk/spdk_pid363589 00:43:25.977 Removing: /var/run/dpdk/spdk_pid367055 00:43:25.977 Removing: /var/run/dpdk/spdk_pid370394 00:43:25.977 Removing: /var/run/dpdk/spdk_pid376877 00:43:25.977 Removing: /var/run/dpdk/spdk_pid381354 00:43:25.977 Removing: /var/run/dpdk/spdk_pid381358 00:43:25.977 Removing: /var/run/dpdk/spdk_pid394739 00:43:25.977 Removing: /var/run/dpdk/spdk_pid395152 00:43:25.977 Removing: /var/run/dpdk/spdk_pid395677 00:43:25.978 Removing: /var/run/dpdk/spdk_pid396081 00:43:25.978 Removing: /var/run/dpdk/spdk_pid396663 00:43:25.978 Removing: /var/run/dpdk/spdk_pid397065 00:43:25.978 Removing: /var/run/dpdk/spdk_pid397494 00:43:25.978 Removing: /var/run/dpdk/spdk_pid398003 00:43:25.978 Removing: /var/run/dpdk/spdk_pid400388 00:43:26.235 Removing: /var/run/dpdk/spdk_pid400653 00:43:26.235 Removing: /var/run/dpdk/spdk_pid404443 00:43:26.235 Removing: /var/run/dpdk/spdk_pid404494 00:43:26.235 Removing: /var/run/dpdk/spdk_pid407854 00:43:26.235 Removing: /var/run/dpdk/spdk_pid410455 00:43:26.235 Removing: /var/run/dpdk/spdk_pid417384 00:43:26.235 Removing: /var/run/dpdk/spdk_pid417779 00:43:26.235 Removing: /var/run/dpdk/spdk_pid420290 00:43:26.235 Removing: /var/run/dpdk/spdk_pid420556 00:43:26.235 Removing: /var/run/dpdk/spdk_pid423124 00:43:26.235 Removing: /var/run/dpdk/spdk_pid427372 00:43:26.235 Removing: /var/run/dpdk/spdk_pid429515 00:43:26.235 Removing: /var/run/dpdk/spdk_pid435770 00:43:26.235 Removing: /var/run/dpdk/spdk_pid440968 00:43:26.235 Removing: /var/run/dpdk/spdk_pid442269 00:43:26.235 Removing: 
/var/run/dpdk/spdk_pid442926 00:43:26.235 Removing: /var/run/dpdk/spdk_pid453086 00:43:26.235 Removing: /var/run/dpdk/spdk_pid455312 00:43:26.235 Removing: /var/run/dpdk/spdk_pid457216 00:43:26.235 Removing: /var/run/dpdk/spdk_pid462888 00:43:26.235 Removing: /var/run/dpdk/spdk_pid463004 00:43:26.235 Removing: /var/run/dpdk/spdk_pid465908 00:43:26.235 Removing: /var/run/dpdk/spdk_pid467197 00:43:26.235 Removing: /var/run/dpdk/spdk_pid468586 00:43:26.235 Removing: /var/run/dpdk/spdk_pid469445 00:43:26.235 Removing: /var/run/dpdk/spdk_pid470843 00:43:26.235 Removing: /var/run/dpdk/spdk_pid471716 00:43:26.235 Removing: /var/run/dpdk/spdk_pid477017 00:43:26.235 Removing: /var/run/dpdk/spdk_pid477389 00:43:26.235 Removing: /var/run/dpdk/spdk_pid477784 00:43:26.235 Removing: /var/run/dpdk/spdk_pid479340 00:43:26.235 Removing: /var/run/dpdk/spdk_pid479732 00:43:26.236 Removing: /var/run/dpdk/spdk_pid480010 00:43:26.236 Removing: /var/run/dpdk/spdk_pid482456 00:43:26.236 Removing: /var/run/dpdk/spdk_pid482473 00:43:26.236 Removing: /var/run/dpdk/spdk_pid483935 00:43:26.236 Removing: /var/run/dpdk/spdk_pid484342 00:43:26.236 Removing: /var/run/dpdk/spdk_pid484424 00:43:26.236 Removing: /var/run/dpdk/spdk_pid97441 00:43:26.236 Removing: /var/run/dpdk/spdk_pid98185 00:43:26.236 Removing: /var/run/dpdk/spdk_pid99129 00:43:26.236 Removing: /var/run/dpdk/spdk_pid99459 00:43:26.236 Clean 00:43:26.236 20:44:38 -- common/autotest_common.sh@1453 -- # return 0 00:43:26.236 20:44:38 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:43:26.236 20:44:38 -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:26.236 20:44:38 -- common/autotest_common.sh@10 -- # set +x 00:43:26.236 20:44:38 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:43:26.236 20:44:38 -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:26.236 20:44:38 -- common/autotest_common.sh@10 -- # set +x 00:43:26.236 20:44:38 -- spdk/autotest.sh@392 -- # chmod a+r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:26.236 20:44:38 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:43:26.236 20:44:38 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:43:26.236 20:44:38 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:43:26.236 20:44:38 -- spdk/autotest.sh@398 -- # hostname 00:43:26.236 20:44:38 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:43:26.494 geninfo: WARNING: invalid characters removed from testname! 00:43:58.553 20:45:09 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:01.886 20:45:13 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:04.413 20:45:16 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:07.694 20:45:19 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:10.973 20:45:22 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:13.500 20:45:25 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:16.783 20:45:28 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:44:16.783 20:45:28 -- spdk/autorun.sh@1 -- $ timing_finish 00:44:16.783 20:45:28 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:44:16.783 20:45:28 -- 
common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:44:16.783 20:45:28 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:44:16.783 20:45:28 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:44:16.783 + [[ -n 6053 ]] 00:44:16.783 + sudo kill 6053 00:44:16.794 [Pipeline] } 00:44:16.809 [Pipeline] // stage 00:44:16.815 [Pipeline] } 00:44:16.829 [Pipeline] // timeout 00:44:16.835 [Pipeline] } 00:44:16.849 [Pipeline] // catchError 00:44:16.854 [Pipeline] } 00:44:16.869 [Pipeline] // wrap 00:44:16.876 [Pipeline] } 00:44:16.889 [Pipeline] // catchError 00:44:16.899 [Pipeline] stage 00:44:16.901 [Pipeline] { (Epilogue) 00:44:16.914 [Pipeline] catchError 00:44:16.916 [Pipeline] { 00:44:16.929 [Pipeline] echo 00:44:16.930 Cleanup processes 00:44:16.936 [Pipeline] sh 00:44:17.225 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:17.225 497357 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:17.239 [Pipeline] sh 00:44:17.525 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:17.525 ++ grep -v 'sudo pgrep' 00:44:17.525 ++ awk '{print $1}' 00:44:17.525 + sudo kill -9 00:44:17.525 + true 00:44:17.537 [Pipeline] sh 00:44:17.819 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:44:30.030 [Pipeline] sh 00:44:30.322 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:44:30.322 Artifacts sizes are good 00:44:30.340 [Pipeline] archiveArtifacts 00:44:30.348 Archiving artifacts 00:44:30.807 [Pipeline] sh 00:44:31.116 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:44:31.131 [Pipeline] cleanWs 00:44:31.142 [WS-CLEANUP] Deleting project workspace... 00:44:31.142 [WS-CLEANUP] Deferred wipeout is used... 
00:44:31.149 [WS-CLEANUP] done 00:44:31.151 [Pipeline] } 00:44:31.168 [Pipeline] // catchError 00:44:31.180 [Pipeline] sh 00:44:31.464 + logger -p user.info -t JENKINS-CI 00:44:31.472 [Pipeline] } 00:44:31.486 [Pipeline] // stage 00:44:31.491 [Pipeline] } 00:44:31.508 [Pipeline] // node 00:44:31.513 [Pipeline] End of Pipeline 00:44:31.551 Finished: SUCCESS
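[Editor's sketch] The `killprocess` helper exercised around 00:43:20 above validates the PID with `kill -0`, reads the command name via `ps --no-headers -o comm=` (which reported `reactor_1` and `reactor_0`), refuses to signal a process named `sudo`, then kills and reaps the target. A simplified, self-contained version of that flow; the background `sleep` stands in for the real SPDK target process:

```shell
#!/usr/bin/env bash
# Simplified killprocess pattern from common/autotest_common.sh:
# validate the PID, inspect its command name, signal it, then reap it.
set -u

killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1                        # the '[' -z ... ']' check
    kill -0 "$pid" 2>/dev/null || return 1           # is the process alive?
    process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
    [ "$process_name" = "sudo" ] && return 1         # never signal sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                  # reap; SIGTERM exit is non-zero
}

sleep 30 &                 # stand-in for the spdk target process
killprocess $!
```

The final `wait` is what lets the log print the target's shutdown output (the latency summary) before the next `killprocess` runs.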
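[Editor's sketch] The coverage epilogue (the 20:45:xx `lcov` invocations above) first merges the baseline and post-test captures with two `-a` flags, then strips third-party and system code with repeated `lcov -r` passes, one exclusion pattern per pass. The same pipeline can be sketched as follows; the output directory is a placeholder, not the jenkins workspace path from the log:

```shell
#!/usr/bin/env bash
# Sketch of the lcov post-processing run at the end of the log: merge the
# baseline and post-test captures, then prune external code from the
# combined report with repeated `lcov -r` passes.
set -u

postprocess_coverage() {
    local out=$1 pattern
    # both captures must exist before merging
    if [ ! -f "$out/cov_base.info" ] || [ ! -f "$out/cov_test.info" ]; then
        echo "capture files missing under $out; nothing to do"
        return 1
    fi
    # -a appends a tracefile; two -a flags merge base + test into one total
    lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" \
         -o "$out/cov_total.info"
    # remove external and generated code, one pattern per pass as in the log
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_top/*'; do
        lcov -q -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
    done
}

# Guarded demo call; COV_OUTPUT_DIR is a hypothetical variable name.
if command -v lcov >/dev/null 2>&1; then
    postprocess_coverage "${COV_OUTPUT_DIR:-./output}" || true
else
    echo "lcov not installed; skipping"
fi
```

Writing each `-r` pass back to the same `cov_total.info` is safe because lcov reads its input fully before emitting output, which is why the log can chain the filters in place.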
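[Editor's sketch] The "Cleanup processes" step (run in both the prologue and the epilogue) lists candidate PIDs with `pgrep -af` on the workspace path, drops the `pgrep` invocation itself with `grep -v`, extracts the PID column with `awk`, and force-kills the rest; the trailing `+ true` in the log keeps the stage green when nothing matched. The same sweep, run here without the `sudo` the log uses:

```shell
#!/usr/bin/env bash
# Sketch of the pipeline's process sweep: find stray processes whose
# command line mentions the workspace path, excluding the pgrep itself,
# and SIGKILL whatever remains.
set -u

workspace=${1:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}

# pgrep -af matches the full command line and prints "<pid> <cmdline>"
pids=$(pgrep -af "$workspace" | grep -v 'pgrep' | awk '{print $1}')

# `kill -9` with an empty PID list exits non-zero; the `|| true`
# mirrors the log's "+ true" so an empty match set is not an error.
kill -9 $pids 2>/dev/null || true
```

Leaving `$pids` unquoted is deliberate here: it lets the shell split multiple PIDs into separate arguments to `kill`.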